CN114935977A - Spatial anchor point processing method and device, electronic equipment and storage medium - Google Patents

Spatial anchor point processing method and device, electronic equipment and storage medium

Info

Publication number
CN114935977A
CN114935977A (application CN202210744786.2A)
Authority
CN
China
Prior art keywords
anchor point
space
dimensional
information
target
Prior art date
Legal status
Granted
Application number
CN202210744786.2A
Other languages
Chinese (zh)
Other versions
CN114935977B
Inventor
Inventor not disclosed
Current Assignee
Beijing 58 Information Technology Co Ltd
Original Assignee
Beijing 58 Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing 58 Information Technology Co Ltd filed Critical Beijing 58 Information Technology Co Ltd
Priority to CN202210744786.2A
Publication of CN114935977A
Application granted
Publication of CN114935977B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a spatial anchor point processing method and apparatus, an electronic device, and a storage medium. In the technical scheme provided by the embodiments of the application, each spatial anchor point is rendered in the target three-dimensional real-scene space based on the two-dimensional coordinates of that anchor point in the screen coordinate system, obtained through coordinate conversion, together with the style information and fixed pixel value assigned to that anchor point in a spatial anchor point rendering file. Because each spatial anchor point is rendered from its two-dimensional screen coordinates and is given a fixed pixel value in the rendering file, its pixel value does not change when the picture is enlarged: the anchor point does not become blurry as the view is zoomed in, which preserves the user experience.

Description

Spatial anchor point processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of virtual reality (VR) technology, and in particular to a spatial anchor point processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of virtual reality (VR) technology, VR and spatial anchor point techniques are now widely applied in application scenarios built on a three-dimensional real-scene space, such as online VR car viewing. In a VR scene, a spatial anchor point can concretely and vividly pin the detail information of a target object that exists in a physical space to the corresponding position in the three-dimensional real-scene space representing that physical space.
A spatial anchor point is displayed in the three-dimensional real-scene space in a specific form; for example, it may appear on the graphical user interface as a circle, a rectangle, or a mixed graphic. Operating a spatial anchor point displays, in the three-dimensional real-scene space, the detail information of the target object associated with that anchor point.
In the prior art, a spatial anchor point is generated from its world coordinates through the Canvas technique, where a Canvas is the container for all UI components in a game scene. However, when a spatial anchor point generated through the Canvas technique is displayed in the three-dimensional real-scene space, enlarging the picture makes the anchor point unclear, which degrades the user experience.
Disclosure of Invention
In order to solve or mitigate the problems in the prior art, embodiments of the present application provide a spatial anchor point processing method and apparatus, an electronic device, and a storage medium.
In an embodiment of the present application, a spatial anchor point processing method is provided. The method comprises the following steps: receiving a spatial anchor point generation request initiated for a target three-dimensional real-scene space, where the request includes the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space; transforming the three-dimensional space coordinates of each spatial anchor point, based on the conversion relationship between the world coordinate system in which the target three-dimensional real-scene space is located and the screen coordinate system, to obtain the two-dimensional coordinates of each spatial anchor point in the screen coordinate system; and rendering each spatial anchor point in the target three-dimensional real-scene space based on its two-dimensional coordinates in the screen coordinate system and a spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, where the rendering file includes at least the style information and the fixed pixel value corresponding to each spatial anchor point.
In another embodiment of the present application, a spatial anchor point processing apparatus is provided. The apparatus comprises: a receiving module, configured to receive a spatial anchor point generation request initiated for a target three-dimensional real-scene space, where the request includes the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space; a coordinate transformation module, configured to transform the three-dimensional space coordinates of each spatial anchor point, based on the conversion relationship between the world coordinate system in which the target three-dimensional real-scene space is located and the screen coordinate system, into the two-dimensional coordinates of each spatial anchor point in the screen coordinate system; and an anchor point rendering module, configured to render each spatial anchor point in the target three-dimensional real-scene space based on its two-dimensional coordinates in the screen coordinate system and a spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, where the rendering file includes at least the style information and the fixed pixel value corresponding to each spatial anchor point.
In yet another embodiment of the present application, an electronic device is provided. The device comprises a memory and a processor; the memory is used to store a computer program, and the processor, coupled with the memory, executes the computer program to implement the steps of the method described above.
In a further embodiment of the present application, a computer-readable storage medium is provided, storing a computer program or instructions which, when executed by a processor, cause the processor to implement the steps of the method described above.
In the technical scheme provided by the embodiments of the application, the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space are first converted, based on the conversion relationship between the world coordinate system in which that space is located and the screen coordinate system, into two-dimensional coordinates in the screen coordinate system; each spatial anchor point is then rendered in the target three-dimensional real-scene space from those two-dimensional coordinates together with the style information and fixed pixel value assigned to it in the spatial anchor point rendering file. Because each spatial anchor point is rendered from its two-dimensional screen coordinates and is given a fixed pixel value in the rendering file, its pixel value does not change when the picture is enlarged: the anchor point does not become blurry as the view is zoomed in, which preserves the user experience.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic flowchart of a spatial anchor point processing method according to an exemplary embodiment of the present application;
FIGS. 1b-1f are schematic diagrams of views provided in an exemplary embodiment of the present application;
fig. 2 is a schematic structural diagram of a spatial anchor point processing apparatus according to an exemplary embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
The present application provides the following embodiments to solve or partially solve the problems of the above-described aspects. In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims, and figures of the present application, a number of operations appear in a particular order; these operations may be performed out of that order or in parallel. The sequence numbers of the operations, e.g., 101, 102, etc., merely distinguish the operations and do not by themselves impose any order of execution. The flows may also include more or fewer operations, performed sequentially or in parallel. It should be noted that the descriptions of "first", "second", etc. herein distinguish different messages, devices, modules, and the like; they neither imply a sequential order nor require that "first" and "second" be of different types. In addition, the embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments herein without creative effort fall within the protection scope of the present application.
Fig. 1a illustrates a schematic flow chart of a spatial anchor point processing method according to an embodiment of the present application. The method can run on an electronic device that provides a graphical user interface on which a three-dimensional real-scene space corresponding to a target physical space can be displayed. The electronic device may be any intelligent device that can install an application (such as an APP, an applet, or a client), has a display screen, and supports interaction: for example, an intelligent handheld device such as a smartphone or tablet computer, a desktop device such as a notebook or desktop computer, an intelligent wearable device such as a smart watch or smart bracelet, or an intelligent appliance with a display screen such as a smart television, a smart large screen, or an intelligent robot. The application may be a standalone APP or an applet that runs inside a standalone APP.
The target physical space may be any physical space capable of providing online services through its three-dimensional real-scene space, for example a physical automobile space, a physical house space, a physical exhibition space, or a shared office space. A physical automobile space can provide an online car viewing service based on its three-dimensional real-scene space; a physical house space can provide an online house viewing service; a physical exhibition space can provide an online exhibit browsing service; and a shared office space can provide services for viewing, reserving, or renting office space online.
Whatever the target physical space, the user can pan, interact, or roam in the corresponding three-dimensional real-scene space and thereby use the online service it supports. As the user pans or roams, different view angle pictures of the three-dimensional real-scene space must be displayed according to the user's changing view angle and needs, and detail information of each real-scene object can be provided to the user. Which real-scene objects appear in the three-dimensional real-scene space depends on the target physical space and its application scenario. Taking a physical automobile space that provides an online car viewing service through its three-dimensional real-scene space as an example, the physical automobile space may be the interior of a physical automobile; correspondingly, the three-dimensional real-scene space is that of the automobile interior, and the real-scene objects it contains may be structural models of the interior such as the center console, steering wheel, driver's seat, front passenger seat, rear seats, and console box. In the following embodiments, the target physical space is the interior of a physical vehicle and the three-dimensional real-scene space is that of the vehicle interior.
A spatial anchor point is provided in the three-dimensional real-scene space of this embodiment. A spatial anchor point can concretely and vividly pin a real-scene object and its detail information, as they exist in the physical space, to the corresponding position in the three-dimensional real-scene space, and operating the anchor point displays the associated real-scene object and its detail information in that space. To meet this requirement of the user, embodiments of the present application provide a spatial anchor point processing method that can generate a corresponding spatial anchor point for each real-scene object in the three-dimensional real-scene space. Meanwhile, to keep spatial anchor points clear under picture enlargement, when generating the anchor points the embodiments transform the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space, based on the conversion relationship between the world coordinate system in which that space is located and the screen coordinate system, into two-dimensional coordinates in the screen coordinate system, and then render each spatial anchor point from those two-dimensional coordinates together with the style information and fixed pixel value assigned to it in the spatial anchor point rendering file. Because each anchor point is rendered from its two-dimensional screen coordinates with a fixed pixel value, enlarging the picture does not change the anchor point's pixel value; the anchor point does not become blurry as the view is zoomed in, which preserves the user experience.
Specifically, as shown in fig. 1a, the spatial anchor point processing method provided in the embodiment of the present application includes:
101. receiving a spatial anchor point generation request initiated for the target three-dimensional real-scene space, where the request includes the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space;
102. transforming the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space, based on the conversion relationship between the world coordinate system in which that space is located and the screen coordinate system, to obtain the two-dimensional coordinates of each spatial anchor point in the screen coordinate system;
103. rendering each spatial anchor point in the target three-dimensional real-scene space based on its two-dimensional coordinates in the screen coordinate system and a spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, where the rendering file includes at least the style information and the fixed pixel value corresponding to each spatial anchor point.
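A minimal TypeScript sketch of this three-step flow for a web-based viewer follows; the request shape and helper signatures are illustrative assumptions, not the patent's reference implementation (sketches of the helpers appear later in this description):

```typescript
// Hypothetical request shape: world-space positions of the anchors to create.
interface AnchorSpec { id: string; world: [number, number, number]; }
interface AnchorGenerationRequest { sceneId: string; anchors: AnchorSpec[]; }

function processAnchorRequest(
  req: AnchorGenerationRequest,                                     // step 101
  worldToScreen: (w: [number, number, number]) => { x: number; y: number } | null,
  render: (id: string, x: number, y: number) => void,
): void {
  for (const a of req.anchors) {
    const p = worldToScreen(a.world);   // step 102: coordinate conversion
    if (p !== null) {
      render(a.id, p.x, p.y);           // step 103: render at a fixed pixel size
    }
  }
}
```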
In this embodiment, after receiving a spatial anchor point generation request initiated for a target three-dimensional real-scene space, the electronic device transforms the three-dimensional space coordinates of each spatial anchor point included in the request, based on the conversion relationship between the world coordinate system in which the target three-dimensional real-scene space is located and the screen coordinate system, to obtain the two-dimensional coordinates of each spatial anchor point in the screen coordinate system. It should be noted that these two-dimensional coordinates change dynamically as the view angle of the three-dimensional real-scene picture currently displayed on the screen changes.
Further, after the two-dimensional coordinates of each spatial anchor point in the screen coordinate system are obtained, each spatial anchor point can be rendered in the target three-dimensional real-scene space based on those coordinates and the spatial anchor point rendering file corresponding to the target three-dimensional real-scene space.
This embodiment does not limit the type of the spatial anchor point rendering file. For example, the rendering file may include an HTML (HyperText Markup Language) file and a CSS (Cascading Style Sheets) file. HTML is the standard language for making web pages, interpreted by web browsers; it allows images and objects to be embedded, can be used to create interactive forms, structures information such as titles, paragraphs, and lists, and also describes, to some extent, the appearance and semantics of a document. CSS is a computer language used to express the style of files such as HTML (an application of the standard generalized markup language) or XML (a subset of the standard generalized markup language). CSS can not only statically style a web page but also, in cooperation with various scripting languages, dynamically format its elements.
In this embodiment, the HTML file includes at least the fixed pixel value corresponding to each spatial anchor point, which sets the number of pixels the anchor point occupies when displayed on the screen. The CSS file includes at least the style information corresponding to each spatial anchor point, which is used to render the anchor point's display form. The display form is not limited in this embodiment: it may be a static or dynamic single graphic or mixed graphic, where a single graphic may be a simple shape such as a rectangle or a square; as shown in fig. 1b, the display form of a spatial anchor point may be a mixed graphic with a dynamic flashing light. Because each spatial anchor point is rendered from its two-dimensional screen coordinates and is given a fixed pixel value in the rendering file, its pixel value does not change when the picture is enlarged: the anchor point does not become blurry as the view is zoomed in, which preserves the user experience.
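As a concrete illustration of this scheme, a minimal sketch of a dom anchor element with a fixed pixel size follows; the class name, sizes, and styling are assumptions rather than values from the patent:

```typescript
// Illustrative style information and fixed pixel value.
const anchorCss = `
  .space-anchor {
    position: absolute;
    width: 24px;          /* fixed pixel value: 24 screen pixels, always */
    height: 24px;
    border-radius: 50%;   /* style information: a circular anchor */
    background: rgba(255, 255, 255, 0.9);
  }
`;

const styleTag = document.createElement('style');
styleTag.textContent = anchorCss;
document.head.appendChild(styleTag);

// Create one anchor element whose center sits on the given screen coordinate.
function createAnchorElement(id: string, x: number, y: number): HTMLElement {
  const el = document.createElement('div');
  el.id = id;
  el.className = 'space-anchor';
  el.style.left = `${x - 12}px`;  // 12 = half of the 24px fixed width
  el.style.top = `${y - 12}px`;
  document.body.appendChild(el);
  return el;
}
```

Because the width and height are expressed in fixed screen pixels rather than scene units, zooming the 3D picture does not rescale the anchor's pixels.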
In this embodiment, a specific implementation of transforming the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space into two-dimensional coordinates in the screen coordinate system is as follows: first, multiply the three-dimensional space coordinates of each spatial anchor point by the view matrix to obtain the intermediate state coordinates of each spatial anchor point in the observation coordinate system; then multiply those intermediate state coordinates by the camera's projection matrix to obtain the two-dimensional coordinates of each spatial anchor point in the standard coordinate system; and finally, according to the conversion relationship between the standard coordinate system and the screen coordinate system, convert the x-axis and y-axis coordinates of each spatial anchor point in the standard coordinate system into its width and height coordinates in the screen coordinate system.
In the three-dimensional real-scene space, the user's viewpoint position and line of sight can be regarded as the position and orientation of a camera shooting the space, with the user's viewpoint (or the camera's position) as the coordinate origin; the coordinates of each physical model's spatial anchor point, as observed, change along with the user's viewpoint and line of sight (or the camera's position and orientation). The view matrix determines the camera's angle and position; the coordinate system in which the camera sits is the observation coordinate system, and the view matrix converts coordinates from the world coordinate system into the observation coordinate system. On this basis, multiplying the three-dimensional space coordinates of each spatial anchor point in the target three-dimensional real-scene space by the view matrix yields the intermediate state coordinates of that anchor point in the observation coordinate system, which are still three-dimensional.
Further, after the three-dimensional intermediate state coordinates of each spatial anchor point in the observation coordinate system are obtained, they must be converted into two-dimensional coordinates in the corresponding standard coordinate system before screen coordinates can be derived. This amounts to projecting the three-dimensional intermediate state coordinates onto a plane: multiplying the intermediate state coordinates of each spatial anchor point by the camera's projection matrix yields its two-dimensional coordinates in the standard coordinate system. For example, when the camera faces the Z direction, the three-dimensional intermediate state coordinates of each spatial anchor point are projected onto the XOY plane; the two-dimensional coordinate system on the XOY plane is the standard coordinate system, and the projected coordinates are the two-dimensional coordinates of each spatial anchor point in it.
It should be noted that in the standard coordinate system of this embodiment, both coordinate axes have a set range, and a fixed conversion relationship exists between the standard coordinate system and the screen coordinate system: the ratio of an anchor point's coordinate along one axis of the standard coordinate system to that axis's length equals the ratio of the corresponding screen coordinate to the screen width, and likewise for the other axis and the screen height. Assume the range of both axes in the standard coordinate system is (-1, 1), the two-dimensional coordinates of a spatial anchor point in the standard coordinate system are (x1, y1), the corresponding screen coordinates are (x, y), the screen's width direction corresponds to the x axis of the screen coordinate system with screen width w, and the screen's length direction corresponds to the y axis with screen height h. Then the conversion relationship between the standard coordinate system and the screen coordinate system is: (x1 + 1)/2 = x/w and (y1 + 1)/2 = y/h. That is, after the two-dimensional coordinates of each spatial anchor point in the standard coordinate system are obtained, its x-axis and y-axis coordinates can be converted into width and height coordinates in the screen coordinate system according to this relationship, namely x = (x1 + 1)w/2 and y = (y1 + 1)h/2.
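The full conversion chain (world coordinates, view matrix, projection matrix, perspective divide, screen mapping) can be sketched as follows. The column-major matrix layout and helper names are assumptions; the perspective divide is the step implied by the projection, and the sketch keeps the text's y convention while noting the usual browser flip:

```typescript
// Apply a column-major 4x4 matrix (flattened into number[16]) to a
// homogeneous vector.
function transform(m: number[], v: [number, number, number, number]) {
  const out: [number, number, number, number] = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[row + 4] * v[1] + m[row + 8] * v[2] + m[row + 12] * v[3];
  }
  return out;
}

// World coordinates -> intermediate state (view) -> standard (NDC) -> screen.
function worldToScreen(
  world: [number, number, number],
  viewMatrix: number[], projMatrix: number[],
  w: number, h: number,
): { x: number; y: number; ndc: [number, number, number] } {
  const inView = transform(viewMatrix, [world[0], world[1], world[2], 1]);
  const clip = transform(projMatrix, inView);
  // Perspective divide gives the standard-coordinate values in (-1, 1).
  const ndc: [number, number, number] =
    [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
  // The conversion derived above: x = (x1 + 1)w/2, y = (y1 + 1)h/2.
  // (Browser screen coordinates grow downward, so a real viewer may need
  // y = (1 - y1)h/2 instead; the text's convention is kept here.)
  return { x: ((ndc[0] + 1) * w) / 2, y: ((ndc[1] + 1) * h) / 2, ndc };
}
```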
Whatever coordinate system the spatial anchor point's coordinates are expressed in, each axis of that coordinate system has a set range, so a spatial anchor point can be rendered on the screen only when its coordinates fall within the set ranges and the anchor point actually appears in the screen's coordinates. Therefore, before rendering each spatial anchor point from its two-dimensional coordinates, it is necessary to judge, along two dimensions, whether the anchor point can be successfully rendered in the screen coordinate system. The first dimension is to judge whether the two-dimensional coordinates of each spatial anchor point in the standard coordinate system fall within the standard coordinate system's set range; the second is to judge whether the anchor point can appear on the screen. It should be noted that these two judgments are related steps: the first is the basis of the second, and the second is performed only when the result of the first is affirmative; they cannot be made independently of each other.
Optionally, a specific implementation of the first dimension is: for each spatial anchor point, judge whether the absolute values of its x-axis, y-axis, and z-axis coordinates in the standard coordinate system are all smaller than the unit length of the standard coordinate system. For the second dimension, each spatial anchor point is in fact a concrete document object model (dom) element that occupies a certain amount of space, so judging whether the anchor point lies within the set range of the screen coordinate system from its abstract two-dimensional coordinates alone is not very accurate. To judge more accurately, first determine whether the center coordinates of the spatial anchor point in the screen coordinate system lie within the set range of the screen coordinate system. A specific implementation is: determine the center coordinates of each spatial anchor point in the screen coordinate system, then judge whether the absolute values of the two axis coordinates of those center coordinates are both smaller than the unit length of the screen coordinate system. The center coordinates can be determined by subtracting half of the dom element's extent in the corresponding coordinate direction from the anchor point's horizontal and vertical coordinates, respectively. On this basis, judge whether the width and height coordinates of the spatial anchor point in the screen coordinate system are smaller than the width and height of the screen, respectively.
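A sketch of the two judgments under one reading of the criteria above; the helper names and the [0, screen) bounds check are assumptions:

```typescript
// First dimension: all three standard-coordinate components must have an
// absolute value below the unit length of the standard coordinate system.
function inStandardRange(ndc: [number, number, number]): boolean {
  return Math.abs(ndc[0]) < 1 && Math.abs(ndc[1]) < 1 && Math.abs(ndc[2]) < 1;
}

// Second dimension: derive the anchor's center by subtracting half of the
// dom element's extent in each direction, then compare against the screen.
function centerOnScreen(
  x: number, y: number,          // anchor's screen coordinates
  domW: number, domH: number,    // pixel extent of the anchor's dom element
  screenW: number, screenH: number,
): boolean {
  const cx = x - domW / 2;
  const cy = y - domH / 2;
  return cx >= 0 && cy >= 0 && cx < screenW && cy < screenH;
}
```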
If both judgments are affirmative, the spatial anchor point is rendered in the target three-dimensional real-scene space based on its two-dimensional coordinates in the screen coordinate system and the style information and fixed pixel value defined for it in the spatial anchor point rendering file. If either judgment is negative, the rendering operation is deferred, and the anchor point is rendered in real time once it can appear on the screen: when the view angle picture of the part of the three-dimensional real-scene space containing the spatial anchor point appears on the screen, the anchor point appears as well and must then be rendered. The view angle picture displayed on the screen may change with scene roaming or with the user's interactive operations.
In this embodiment, a specific implementation of rendering each spatial anchor point in the target three-dimensional real-scene space from its two-dimensional coordinates and the spatial anchor point rendering file is as follows: based on the two-dimensional coordinates of each spatial anchor point, render an initial anchor point model in the target three-dimensional real-scene space using the fixed pixel value assigned to the anchor point in the HTML file, so that the generated anchor point's clarity does not change after enlargement, improving the user experience; then apply the style information assigned to the anchor point in the CSS file to correct the style of each initial anchor point model, producing the final spatial anchor points. Once the spatial anchor points have been rendered, the user can view the detail information of the real-scene object corresponding to each anchor point through the anchor points present in the view angle pictures displayed on the graphical user interface provided by the electronic device. That is, each spatial anchor point is associated with its corresponding real-scene object.
In this embodiment, when the user wants to view certain real-scene objects in the target three-dimensional real-scene space, a view angle picture of the space is displayed on the graphical user interface in response to the user's viewing operation. The picture changes with the user's view angle: each view angle has a corresponding picture, and as the view angle changes, multiple view angle pictures appear. Each view angle picture is a local picture of the target three-dimensional real-scene space and may contain one or more real-scene objects, some associated with spatial anchor points and some not.
Further, when the user wants the detail information of a real-scene object, an information popup of the associated spatial anchor point is displayed on the graphical user interface in response to the user's trigger operation on that anchor point. The information popup is used to display the navigation tag of each real-scene object together with the detail information for the triggered anchor point, or to display the detail information of other real-scene objects via their navigation tags. The style of the information popup is not limited: it may be a single popup or be split into two or more popups.
For example, in an optional embodiment, the information popup includes a navigation window and a detail window. The navigation window contains the real-scene object navigation tags corresponding to the spatial anchor points, and the detail window displays the detail information of the real-scene object corresponding to the currently triggered spatial anchor point or navigation tag. The navigation window can also be displayed at different scales: in response to the user enlarging the navigation window, all real-scene object navigation tags are shown in the enlarged window; in response to the user shrinking it, only some of the tags are shown in the reduced window.
For convenience of description, the view angle picture displayed first on the graphical user interface in the current task may be called the first view angle picture. It is a local picture of the target three-dimensional real-scene space and contains a first real-scene object and the first spatial anchor point associated with it; the first real-scene object is a real-scene object that appears in the first view angle picture and is associated with a spatial anchor point, and the picture may also contain real-scene objects not associated with any anchor point. There may be one or more first real-scene objects. For example, if the first view angle picture is the camera's forward-facing view from the position of the console box, as shown in fig. 1b, the real-scene object it contains is the center console together with the center console spatial anchor point associated with it. As also shown in fig. 1b, the picture contains other real-scene objects, such as the steering wheel, the driver's seat, and the front passenger seat, which are not associated with spatial anchor points. As the user's view angle changes, further view angle pictures appear, which may be called the second view angle picture, the third view angle picture, and so on.
When the user wants the detail information of the first real-scene object, an information popup containing that information is displayed on the graphical user interface in response to the user's trigger operation on the first spatial anchor point. A specific implementation is: in response to the trigger operation, display the navigation window with the first real-scene object navigation tag corresponding to the first spatial anchor point highlighted, and display the detail window with the detail information of the first real-scene object. Optionally, as shown in fig. 1c, this detail information includes a partially enlarged picture of the first real-scene object and text describing its function, material, and service life. The detail information varies with the object: if the first real-scene object is the steering wheel, it may cover the material, the method of use, and the other buttons the steering wheel carries and their functions; if it is the center console, it may be a partially enlarged picture together with the console's method of use and functions.
Further optionally, the user may trigger the first spatial anchor point again; in response, an association line is added between the first spatial anchor point and the corresponding information popup to indicate that the detail information shown in the popup belongs to the first spatial anchor point. The display form of the association line is not limited: it may be a solid or a dashed line. Taking the first spatial anchor point as the center console spatial anchor point as an example, the corresponding view angle picture and information popup are shown in fig. 1d. Through the association line, the user can easily tell which spatial anchor point the detail information in the current information window is associated with, that is, which anchor point was triggered.
In this embodiment, when the information popup is displayed on the graphical user interface in response to the user's trigger operation on a spatial anchor point, the popup may occlude that anchor point. To avoid this, still taking the first spatial anchor point as an example, an optional implementation of displaying the information popup in response to the trigger operation is as follows: in response to the touch operation on the first spatial anchor point, obtain the two-dimensional coordinates of the first spatial anchor point in the first view angle picture and the screen position and size information of the information popup; determine, from these, whether the first spatial anchor point is occluded by the information popup; and if it is, adjust the relative display positions of the first spatial anchor point and the information popup and display the popup at the adjusted relative position so that the anchor point is no longer occluded. The screen position of the information popup is not limited: it may sit at the top, bottom, left, right, or center of the screen; the size information is the popup's length and width.
An optional implementation of determining whether the first spatial anchor point is occluded by the information popup, from the anchor point's two-dimensional coordinates in the first view angle picture and the popup's screen position and size information, is as follows: from the popup's screen position and size information and the anchor point's two-dimensional coordinates, determine the target outer edge of the information popup, that is, the outer edge of the popup closest to the first spatial anchor point, and the distance from the anchor point to that edge. If the distance from the first spatial anchor point to the closest outer edge is greater than or equal to the set distance value, it can be determined that the anchor point is not occluded by the information popup; if the distance is smaller than the set distance value, it can be determined that the anchor point is occluded.
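A sketch of this occlusion criterion as literally stated (distance from the anchor to the popup's nearest outer edge compared against the set distance value); the names are assumptions, and a production test would presumably also treat an anchor lying deep inside the popup rectangle as occluded:

```typescript
interface PopupRect { left: number; top: number; width: number; height: number; }

// Distance from the anchor to the popup's nearest outer edge (the "target
// outer edge" described in the text).
function distanceToNearestEdge(ax: number, ay: number, popup: PopupRect): number {
  const dLeft = Math.abs(ax - popup.left);
  const dRight = Math.abs(popup.left + popup.width - ax);
  const dTop = Math.abs(ay - popup.top);
  const dBottom = Math.abs(popup.top + popup.height - ay);
  return Math.min(dLeft, dRight, dTop, dBottom);
}

// Occluded when the nearest-edge distance falls below the set distance value.
function isOccluded(ax: number, ay: number, popup: PopupRect,
                    setDistance: number): boolean {
  return distanceToNearestEdge(ax, ay, popup) < setDistance;
}
```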
Further optionally, when the first spatial anchor point is not occluded by the information popup, the popup may be displayed on the graphical user interface directly at the current relative display positions of the anchor point and the popup.
Further optionally, when the first spatial anchor point is occluded by the information popup, the relative display positions of the anchor point and the popup are adjusted, and the popup is displayed at the adjusted relative position so that the anchor point is no longer occluded. Several optional implementations follow:
the method I comprises the following steps: and keeping the screen position of the information popup unchanged, rotating the display visual angle of the first visual angle picture to rotate the first space anchor point to the outside of the information popup, and displaying the information popup on the graphical user interface according to the screen position of the information popup.
Optionally, a specific implementation of rotating the first spatial anchor point out of the information popup while keeping the popup's screen position unchanged is: from the popup's screen position and size information and the anchor point's two-dimensional coordinates in the first view angle picture, determine the target outer edge (the popup's outer edge closest to the anchor point) and the distance from the anchor point to that edge; determine a rotation direction and a rotation angle in that direction from the position of the target outer edge and the distance to it; and rotate the display view angle of the first view angle picture by that direction and angle so that the first spatial anchor point moves out of the information popup.
It should be noted that rotating the display view angle of the first view angle picture is in fact rotating the camera's view angle in the target three-dimensional real-scene space, and the rotation direction and the rotation angle in that direction are in fact the camera's rotation direction and its rotation angle in that direction. For ease of understanding and calculation, the camera's rotation can be split into angle components on the three coordinate planes of the camera's three-dimensional coordinate system; rotating the camera in reverse by the corresponding angle on each of the three planes moves the first spatial anchor point out of the information popup.
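The patent decomposes the rotation onto three coordinate planes; the sketch below instead uses a simpler two-angle (yaw/pitch) approximation under an assumed pinhole-camera model, converting the pixel offset needed to clear the popup into reverse rotation angles. All names and the small-angle approximation are assumptions:

```typescript
// Convert a required pixel offset into yaw/pitch deltas via the camera's
// vertical field of view (radians), approximating pixels-per-radian near
// the view center, then apply the rotation in reverse, as the text describes.
function rotationToClearPopup(
  pixelDx: number, pixelDy: number,
  fovY: number, screenW: number, screenH: number,
): { yaw: number; pitch: number } {
  const fovX = 2 * Math.atan(Math.tan(fovY / 2) * (screenW / screenH));
  return {
    yaw: -pixelDx * (fovX / screenW),   // reverse rotation about the vertical axis
    pitch: -pixelDy * (fovY / screenH), // reverse rotation about the horizontal axis
  };
}
```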
Method two: keep the display view angle of the first view angle picture unchanged, adjust the screen position of the information popup so that it no longer occludes the first spatial anchor point, and display the popup on the graphical user interface at the adjusted screen position.
Optionally, a specific implementation of adjusting the popup's screen position while keeping the display view angle of the first view angle picture unchanged is: from the popup's screen position and size information and the anchor point's two-dimensional coordinates in the first view angle picture, determine the target outer edge (the popup's outer edge closest to the first spatial anchor point) and the distance from the anchor point to that edge; then adjust the popup's screen position according to the position of the target outer edge and that distance, so that the popup no longer occludes the anchor point.
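A sketch of this adjustment, reusing PopupRect and distanceToNearestEdge from the occlusion sketch above; pushing the popup horizontally away from the anchor is one simple policy among many and is an assumption (a real implementation would also clamp the popup to the screen bounds):

```typescript
// Shift the popup away from the anchor until the set distance is restored.
function movePopupClearOfAnchor(
  popup: PopupRect, ax: number, ay: number, setDistance: number,
): PopupRect {
  const shortfall = setDistance - distanceToNearestEdge(ax, ay, popup);
  if (shortfall <= 0) return popup;  // already clear of the anchor
  const dir = ax < popup.left + popup.width / 2 ? 1 : -1;  // push away from anchor
  return { ...popup, left: popup.left + dir * shortfall };
}
```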
Method three: cooperatively adjust the display view angle of the first view angle picture and the screen position of the information popup so that the first spatial anchor point is no longer occluded, and display the popup on the graphical user interface at the resulting screen position.
Optionally, keep the popup's screen position unchanged and rotate the display view angle of the first view angle picture by a certain amount; then keep the current display view angle unchanged and adjust the popup's screen position; repeat these adjustments alternately until the popup no longer occludes the first spatial anchor point, and display the popup at the adjusted screen position. Alternatively, first keep the display view angle unchanged and adjust the popup's screen position within a certain range, then keep the popup's current position unchanged and rotate the display view angle by a certain amount, again repeating until the popup no longer occludes the anchor point, and display the popup at the adjusted screen position. For the specific implementation of each step, refer to methods one and two above; details are not repeated here.
In this embodiment, with the information window displayed on the screen, if the user wants the detail information of a second real-scene object, the user may trigger the navigation tag corresponding to that object in the navigation window. In response, the detail window displays the detail information of the second real-scene object, and the first view angle picture currently displayed on the graphical user interface switches to the second view angle picture in which the second real-scene object sits. The second view angle picture contains at least the second real-scene object and the second spatial anchor point associated with it; the second real-scene object navigation tag is a navigation tag different from the first, and accordingly the detail window changes from showing the first real-scene object's details to showing the second's. It should be noted that the detail information of the second real-scene object may include, but is not limited to: a partially enlarged picture of the object and text on its function, material, service life, maintenance, and the like; this information differs from object to object. For example, when the first real-scene object is the center console and its spatial anchor point is triggered, the information window shows a partially enlarged picture of the center console, its functions of checking road conditions and playing audio, a service life of N years, and the corresponding maintenance information. When the second real-scene object is the console box, in response to triggering its navigation tag the second view angle picture appears as shown in fig. 1e, and the information popup shows a partially enlarged picture of the console box, its function of supporting the arm while driving, a service life of M years, and the corresponding maintenance information.
In this embodiment, when the user needs to switch the view angle, a new view angle is determined in response to a view angle switching operation performed by the user on the graphical user interface. A view angle switching operation is any operation that changes the view angle of the three-dimensional live-action picture; for example, it may be directly dragging, sliding, or rotating the three-dimensional live-action picture, or clicking a view angle switching control. After the new view angle is determined, a third visual angle picture adapted to the new view angle is displayed on the graphical user interface; the third visual angle picture is also a local picture in the target three-dimensional live-action space and includes a third live-action object and a third space anchor point associated with it. For example, if the view angle switching operation is sliding the three-dimensional live-action picture to the right by a certain distance, then along with that sliding operation the picture is switched to the third visual angle picture at the driver's seat, which includes the third space anchor point, as shown in fig. 1 f. It should be noted that the third visual angle picture and the second visual angle picture may be the same picture or different pictures, and space anchor points belonging to other visual angle pictures may also be displayed in the third visual angle picture at the same time. Optionally, the other visual angle picture is a visual angle picture displayed before the third visual angle picture, preferably the one displayed most recently before it. Alternatively, in another optional embodiment, the other visual angle picture is a visual angle picture associated with the third visual angle picture, which may or may not have been displayed before the third visual angle picture; this is not limited here. For example, if the third visual angle picture is the one where the rear seat is located, the visual angle picture associated with it may be the one where the steering wheel is located. In that case, the visual angle picture at the rear seat can display not only its own space anchor points but also the space anchor point in the visual angle picture where the steering wheel is located, so that by triggering the latter anchor point the user can switch directly from the current visual angle picture to the visual angle picture where the steering wheel is located. Correspondingly, the information popup follows synchronously; for example, the highlighted navigation tag is switched synchronously, and the detail information displayed in the detail window is switched synchronously as well.
Further, when the user needs to view the detail information of the third live-action object, in response to the user's trigger operation on the third space anchor point, the third live-action object navigation tag corresponding to the third space anchor point is highlighted in the navigation window, and the detail information of the third live-action object is displayed in the detail window. For example, when the third space anchor point is the steering wheel space anchor point, the corresponding visual angle picture is as shown in fig. 1 f.
Furthermore, when the user needs to view the detail information of space anchor points other than the first, second, and third space anchor points, in response to the user's trigger operation on such another space anchor point, the navigation tag of the live-action object corresponding to that space anchor point is highlighted in the navigation window, and the detail information of that live-action object is displayed in the detail window.
The technical solution of the embodiments of the present application is described in detail below with reference to an online car-viewing scenario.
An online car-viewing APP is installed on the user's electronic device, such as a mobile phone. The user can open the online car-viewing APP and enter the display page of the three-dimensional space of the automobile interior. The first picture shown on this display page is the first visual angle picture, which includes a first live-action object and the first space anchor point corresponding to it. As shown in fig. 1b, the first visual angle picture is the picture taken with the camera located at the armrest box and facing forward; the live-action objects visible in this picture are the center console and the steering wheel, together with the center console space anchor point and the steering wheel space anchor point associated with them respectively.
In response to the user's trigger operation on the first space anchor point, for example on the center console space anchor point, an information popup is displayed to the right of the center console space anchor point, at a position where it does not occlude the anchor point. The information popup includes a navigation window and a detail window: the navigation tag of the center console is highlighted in the navigation window, and the detail information of the center console is displayed in the detail window, including a locally enlarged picture of the center console and text information such as its function, material, and service life. On this basis, in response to a further trigger operation by the user on the center console anchor point, an association line is added between the center console space anchor point and the information popup. In addition, the navigation window in the information popup may be expanded or collapsed in response to a zoom operation by the user; as shown in fig. 1d, after the navigation window is expanded, navigation tags of more live-action objects can be displayed in it.
Further, as shown in fig. 1e, with the center console information popup displayed, in response to the user's trigger operation on the navigation tag of the armrest box in the navigation window, the highlighted navigation tag is switched from that of the center console to that of the armrest box, the detail information is switched from the detail information of the center console to that of the armrest box, and the first visual angle picture currently displayed on the graphical user interface is switched to the second visual angle picture where the armrest box is located. In response to the user's trigger operation on the hide-popup control, the information popup enters a hidden state.
Further, in response to the user dragging the three-dimensional live-action picture to the right, a new view angle is determined, and a third visual angle picture adapted to the new view angle is displayed on the graphical user interface. Taking the picture shown in fig. 1f as the third visual angle picture as an example, it includes at least the steering wheel and the steering wheel space anchor point associated with it. In response to the user's trigger operation on the steering wheel space anchor point, an information popup is displayed, the steering wheel navigation tag in the navigation window is highlighted, and the detail information of the steering wheel is displayed in the detail window.
It should be noted that, as the visual angle picture is switched, the space anchor points corresponding to the live-action objects displayable in the current visual angle picture are generated in real time based on the two-dimensional coordinates of those space anchor points in the screen coordinate system and the space anchor point rendering file corresponding to the target three-dimensional live-action space. For the specific implementation of generating the space anchor points, reference may be made to the foregoing embodiments; details are not repeated here.
It should also be noted that, when an information popup for a space anchor point is displayed on the graphical user interface in response to the user's touch operation on that space anchor point, if the space anchor point is occluded by the information popup, the relative display positions of the space anchor point and the information popup are adjusted, and the information popup is displayed on the graphical user interface according to the adjusted relative display positions, so that the space anchor point is no longer occluded. For the specific implementation of this adjustment, reference may be made to the foregoing embodiments; details are not repeated here.
Fig. 2 is a schematic structural diagram of a spatial anchor point processing apparatus according to an exemplary embodiment of the present application.
As shown in fig. 2, the apparatus includes:
the receiving module 21 is configured to receive a space anchor point generation request initiated for the target three-dimensional live-action space, where the space anchor point generation request includes three-dimensional space coordinates of each space anchor point in the target three-dimensional live-action space;
the coordinate transformation module 22 is configured to perform coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional real scene space based on a transformation relationship between a world coordinate system in which the target three-dimensional real scene space is located and a screen coordinate system, so as to obtain two-dimensional coordinates of each space anchor point in the screen coordinate system;
the anchor point rendering module 23 is configured to render each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in the screen coordinate system and a spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, where the spatial anchor point rendering file includes at least the style information and the fixed pixel value corresponding to each spatial anchor point.
Further optionally, the spatial anchor point rendering file includes an HTML file and a CSS file, where the HTML file includes at least the fixed pixel values corresponding to the spatial anchor points, and the CSS file includes at least the style information corresponding to the spatial anchor points. Based on this, when rendering each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point and the spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, the anchor point rendering module 23 is specifically configured to: render each initial anchor point model in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in combination with the fixed pixel values corresponding to each spatial anchor point in the HTML file; and perform style correction on each initial anchor point model using the style information corresponding to each spatial anchor point in the CSS file, to obtain each spatial anchor point.
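As a minimal sketch, and assuming a browser environment, the two rendering steps above might look as follows. AnchorSpec, renderAnchor, and the field names are illustrative inventions, not names defined by this application; only the standard DOM APIs used are real.

```typescript
// One anchor's entry in the rendering files: a fixed pixel value (from the
// HTML file) plus a CSS class name carrying its style information (CSS file).
interface AnchorSpec {
  id: string;
  fixedSizePx: number; // fixed pixel value: the anchor keeps this size on screen
  cssClass: string;    // style information applied in the correction step
}

function renderAnchor(spec: AnchorSpec, screenX: number, screenY: number): HTMLElement {
  // Step 1: build the initial anchor point model as an absolutely positioned
  // dom element of the fixed pixel size, centered on the projected 2D point.
  const el = document.createElement('div');
  el.id = spec.id;
  el.style.position = 'absolute';
  el.style.width = `${spec.fixedSizePx}px`;
  el.style.height = `${spec.fixedSizePx}px`;
  el.style.left = `${screenX - spec.fixedSizePx / 2}px`;
  el.style.top = `${screenY - spec.fixedSizePx / 2}px`;
  // Step 2: style correction via the CSS file's class (color, icon, etc.).
  el.classList.add(spec.cssClass);
  document.body.appendChild(el);
  return el;
}
```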
Further, the apparatus further includes: a display module 24 configured to display a first visual angle picture on a graphical user interface provided by the electronic device, where the first visual angle picture is a local picture in the target three-dimensional real-scene space and includes a first real-scene object and a first space anchor point associated with the first real-scene object; and in response to a trigger operation on the first space anchor point, display an information popup on the graphical user interface, where the information popup contains the detail information of the first real-scene object.
Further optionally, when displaying the information popup on the graphical user interface in response to the trigger operation on the first space anchor point, the display module 24 is specifically configured to: in response to a touch operation on the first space anchor point, acquire the two-dimensional coordinates of the first space anchor point in the first visual angle picture and the screen position and size information of the information popup; determine, according to those two-dimensional coordinates and the screen position and size information of the information popup, whether the first space anchor point is occluded by the information popup; and if the first space anchor point is occluded, adjust the relative display positions of the first space anchor point and the information popup and display the information popup on the graphical user interface according to the adjusted relative display positions, so that the first space anchor point is no longer occluded by the information popup.
Further optionally, when adjusting the relative display positions of the first space anchor point and the information popup and displaying the information popup on the graphical user interface according to the adjusted relative display positions, the display module 24 is specifically configured to: keep the screen position of the information popup unchanged and rotate the display visual angle of the first visual angle picture so as to rotate the first space anchor point out of the information popup, then display the information popup on the graphical user interface according to its screen position; or keep the display visual angle of the first visual angle picture unchanged and adjust the screen position of the information popup so that it no longer occludes the first space anchor point, then display the information popup on the graphical user interface according to the adjusted screen position.
Further optionally, when keeping the screen position of the information popup unchanged and rotating the display visual angle of the first visual angle picture so as to rotate the first space anchor point out of the information popup, the display module 24 is specifically configured to: determine, according to the screen position and size information of the information popup and the two-dimensional coordinates of the first space anchor point in the first visual angle picture, a target outer edge of the information popup and the distance from the first space anchor point to the target outer edge, where the target outer edge is the outer edge of the information popup closest to the first space anchor point; determine a rotation direction and a rotation angle in that direction according to the position of the target outer edge and the distance from the first space anchor point to the target outer edge; and rotate the display visual angle of the first visual angle picture according to the rotation direction and rotation angle so as to rotate the first space anchor point out of the information popup.
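A simplified sketch of this computation follows, under the stated assumption that horizontal rotation maps approximately linearly to screen pixels via a pxPerDegree factor; a real implementation would instead re-project the anchor after each incremental rotation. Only the left and right outer edges are considered here for brevity, and all names are illustrative.

```typescript
interface PopupRect { x: number; y: number; width: number; height: number; }

// Pick the popup outer edge closest to the anchor and derive a rotation
// direction plus an angle large enough to carry the anchor past that edge.
function rotationToClearPopup(
  anchor: { x: number; y: number },
  popup: PopupRect,
  pxPerDegree: number // assumed linear pixels-per-degree conversion factor
): { direction: 'left' | 'right'; degrees: number } {
  const distToLeftEdge = Math.abs(anchor.x - popup.x);
  const distToRightEdge = Math.abs(popup.x + popup.width - anchor.x);
  if (distToLeftEdge <= distToRightEdge) {
    // The target outer edge is the left edge: rotate so the anchor exits leftward.
    return { direction: 'left', degrees: distToLeftEdge / pxPerDegree };
  }
  // Otherwise the target outer edge is the right edge.
  return { direction: 'right', degrees: distToRightEdge / pxPerDegree };
}
```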
Further, the information popup includes a navigation window and a detail window, where the navigation window includes the live-action object navigation tags corresponding to the space anchor points, and the detail window is used to display the detail information of the live-action object corresponding to the currently triggered space anchor point or live-action object navigation tag. Based on this, when displaying the information popup on the graphical user interface in response to the trigger operation on the first space anchor point, the display module 24 is specifically configured to: in response to the trigger operation on the first space anchor point, display the navigation window, highlight the first live-action object navigation tag corresponding to the first space anchor point in the navigation window, display the detail window, and display the detail information of the first live-action object in the detail window.
Further, the display module 24 is further configured to: in response to a trigger operation on a second live-action object navigation tag, display the detail information of the second live-action object corresponding to that tag in the detail window, and switch the first visual angle picture currently displayed on the graphical user interface to the second visual angle picture where the second live-action object is located, where the second visual angle picture includes at least the second live-action object and a second space anchor point associated with it, and the second live-action object navigation tag is a live-action object navigation tag different from the first live-action object navigation tag.
Further, the display module 24 is further configured to: determine a new view angle in response to a view angle switching operation on the graphical user interface; and display, on the graphical user interface, a third visual angle picture adapted to the new view angle, where the third visual angle picture is a local picture in the target three-dimensional real-scene space and includes a third real-scene object and a third space anchor point associated with the third real-scene object.
Further, the navigation window and the detail window are displayed on the third visual angle picture, with the detail information of the first live-action object displayed in the detail window. Based on this, the display module 24 is further configured to: in response to a trigger operation on the third space anchor point, highlight the third live-action object navigation tag corresponding to the third space anchor point in the navigation window, and display the detail information of the third live-action object in the detail window.
Further optionally, when performing coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional real-scene space based on the conversion relationship between the world coordinate system in which the target three-dimensional real-scene space is located and the screen coordinate system, to obtain the two-dimensional coordinates of each space anchor point in the screen coordinate system, the coordinate transformation module 22 is specifically configured to: multiply the three-dimensional space coordinates of each space anchor point in the target three-dimensional real-scene space by a view matrix to obtain the intermediate-state coordinates of each space anchor point in the observation coordinate system, where the view matrix is determined according to the position and orientation of the camera in the target three-dimensional real-scene space; multiply the intermediate-state coordinates of each space anchor point in the observation coordinate system by the projection matrix of the camera to obtain the three-dimensional coordinates of each space anchor point in the standard coordinate system; and convert, according to the conversion relationship between the standard coordinate system and the screen coordinate system, the x-axis and y-axis coordinates of each space anchor point in the standard coordinate system into the width and height coordinates of each space anchor point in the screen coordinate system.
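The three transformation steps can be sketched as follows, assuming column-major 4x4 matrices, a perspective divide into the standard (normalized device) coordinate system, and a screen origin at the top-left (hence the y flip); the helper names are illustrative only.

```typescript
type Vec3 = [number, number, number];
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, column-major

// Multiply a column-major 4x4 matrix by a 4-component vector.
function mulMat4Vec4(m: Mat4, v: Vec4): Vec4 {
  const out: Vec4 = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[4 + row] * v[1] +
               m[8 + row] * v[2] + m[12 + row] * v[3];
  }
  return out;
}

function worldToScreen(
  world: Vec3,            // anchor's three-dimensional space coordinates
  viewMatrix: Mat4,       // determined by the camera's position and orientation
  projectionMatrix: Mat4, // the camera's projection matrix
  screenW: number,
  screenH: number
): { x: number; y: number; ndc: Vec3 } {
  // Step 1: world coordinate system -> observation (camera) coordinate system.
  const viewPos = mulMat4Vec4(viewMatrix, [world[0], world[1], world[2], 1]);
  // Step 2: observation coordinates -> clip space, then perspective divide
  // into the standard coordinate system, where each axis spans [-1, 1].
  const clip = mulMat4Vec4(projectionMatrix, viewPos);
  const ndc: Vec3 = [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
  // Step 3: standard coordinates -> screen width/height coordinates
  // (y is flipped because screen y grows downward).
  const x = ((ndc[0] + 1) / 2) * screenW;
  const y = ((1 - ndc[1]) / 2) * screenH;
  return { x, y, ndc };
}
```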
Further optionally, when rendering each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in the screen coordinate system and the spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, the anchor point rendering module 23 is specifically configured to: judge, for each spatial anchor point, whether the absolute values of the x-axis, y-axis, and z-axis coordinates of the spatial anchor point in the standard coordinate system are all smaller than the unit length of the standard coordinate system; judge whether the sum of the width coordinate of the spatial anchor point in the screen coordinate system and the width of its dom element is smaller than the screen width, and whether the sum of its height coordinate and the height of its dom element is smaller than the screen height; and if all of these judgments are affirmative, render the spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of the spatial anchor point in the screen coordinate system and the style information and fixed pixel value corresponding to the spatial anchor point defined in the spatial anchor point rendering file.
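As a sketch of the two visibility judgments above (the unit-cube test in the standard coordinate system and the on-screen fit of the dom element), under the same illustrative naming assumptions as the previous snippet:

```typescript
// Render the anchor only if (a) its standard-coordinate-system position lies
// strictly inside the unit cube, i.e. every axis has absolute value below the
// unit length, and (b) the dom element, placed at its width/height coordinate,
// still fits within the screen bounds.
function shouldRenderAnchor(
  ndc: [number, number, number], // standard coordinate system position
  widthCoord: number,            // screen x of the anchor
  heightCoord: number,           // screen y of the anchor
  domW: number, domH: number,    // dom element size (the fixed pixel value)
  screenW: number, screenH: number
): boolean {
  const insideUnitCube =
    Math.abs(ndc[0]) < 1 && Math.abs(ndc[1]) < 1 && Math.abs(ndc[2]) < 1;
  const fitsOnScreen =
    widthCoord + domW < screenW && heightCoord + domH < screenH;
  return insideUnitCube && fitsOnScreen;
}
```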
For the principles and specific implementation of each step performed by the modules or units in the embodiments of the present application, reference may be made to the description of the same or corresponding steps above; details are not repeated here.
Fig. 3 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. As shown in fig. 3, the electronic device includes: a memory 30a and a processor 30b. The memory 30a is adapted to store a computer program, and the processor 30b, coupled to the memory 30a, executes the computer program to implement the following steps:
receiving a space anchor point generation request initiated aiming at the target three-dimensional real-scene space, wherein the space anchor point generation request comprises three-dimensional space coordinates of each space anchor point in the target three-dimensional real-scene space; based on the conversion relation between the world coordinate system where the target three-dimensional live-action space is located and the screen coordinate system, carrying out coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional live-action space to obtain two-dimensional coordinates of each space anchor point in the screen coordinate system; rendering each space anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each space anchor point in the screen coordinate system and a space anchor point rendering file corresponding to the target three-dimensional real-scene space, wherein the space anchor point rendering file at least comprises style information corresponding to each space anchor point and a corresponding fixed pixel value.
Further optionally, the spatial anchor point rendering file includes an HTML file and a CSS file, where the HTML file includes at least the fixed pixel values corresponding to the spatial anchor points, and the CSS file includes at least the style information corresponding to the spatial anchor points. Based on this, when rendering each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point and the spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, the processor 30b is specifically configured to: render each initial anchor point model in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in combination with the fixed pixel values corresponding to each spatial anchor point in the HTML file; and perform style correction on each initial anchor point model using the style information corresponding to each spatial anchor point in the CSS file, to obtain each spatial anchor point.
Further, the electronic device further includes: a display 30c configured to display a first visual angle picture on a graphical user interface provided by the electronic device, where the first visual angle picture is a local picture in the target three-dimensional real-scene space and includes a first real-scene object and a first space anchor point associated with the first real-scene object; and in response to a trigger operation on the first space anchor point, display an information popup on the graphical user interface, where the information popup contains the detail information of the first real-scene object.
Further optionally, when displaying the information popup on the graphical user interface in response to the trigger operation on the first space anchor point, the display 30c is specifically configured to: in response to a touch operation on the first space anchor point, acquire the two-dimensional coordinates of the first space anchor point in the first visual angle picture and the screen position and size information of the information popup; determine, according to those two-dimensional coordinates and the screen position and size information of the information popup, whether the first space anchor point is occluded by the information popup; and if the first space anchor point is occluded, adjust the relative display positions of the first space anchor point and the information popup and display the information popup on the graphical user interface according to the adjusted relative display positions, so that the first space anchor point is no longer occluded by the information popup.
Further optionally, when adjusting the relative display positions of the first space anchor point and the information popup and displaying the information popup on the graphical user interface according to the adjusted relative display positions, the display 30c is specifically configured to: keep the screen position of the information popup unchanged and rotate the display visual angle of the first visual angle picture so as to rotate the first space anchor point out of the information popup, then display the information popup on the graphical user interface according to its screen position; or keep the display visual angle of the first visual angle picture unchanged and adjust the screen position of the information popup so that it no longer occludes the first space anchor point, then display the information popup on the graphical user interface according to the adjusted screen position.
Further optionally, when keeping the screen position of the information popup unchanged and rotating the display visual angle of the first visual angle picture so as to rotate the first space anchor point out of the information popup, the display 30c is specifically configured to: determine, according to the screen position and size information of the information popup and the two-dimensional coordinates of the first space anchor point in the first visual angle picture, a target outer edge of the information popup and the distance from the first space anchor point to the target outer edge, where the target outer edge is the outer edge of the information popup closest to the first space anchor point; determine a rotation direction and a rotation angle in that direction according to the position of the target outer edge and the distance from the first space anchor point to the target outer edge; and rotate the display visual angle of the first visual angle picture according to the rotation direction and rotation angle so as to rotate the first space anchor point out of the information popup.
Further, the information popup includes a navigation window and a detail window, where the navigation window includes the live-action object navigation tags corresponding to the space anchor points, and the detail window is used to display the detail information of the live-action object corresponding to the currently triggered space anchor point or live-action object navigation tag. Based on this, when displaying the information popup on the graphical user interface in response to the trigger operation on the first space anchor point, the display 30c is specifically configured to: in response to the trigger operation on the first space anchor point, display the navigation window, highlight the first live-action object navigation tag corresponding to the first space anchor point in the navigation window, display the detail window, and display the detail information of the first live-action object in the detail window.
Further, the display 30c is also configured to: in response to a trigger operation on a second live-action object navigation tag, display the detail information of the second live-action object corresponding to that tag in the detail window, and switch the first visual angle picture currently displayed on the graphical user interface to the second visual angle picture where the second live-action object is located, where the second visual angle picture includes at least the second live-action object and a second space anchor point associated with it, and the second live-action object navigation tag is a live-action object navigation tag different from the first live-action object navigation tag.
Further, the display 30c is also configured to: determine a new view angle in response to a view angle switching operation on the graphical user interface; and display, on the graphical user interface, a third visual angle picture adapted to the new view angle, where the third visual angle picture is a local picture in the target three-dimensional real-scene space and includes a third real-scene object and a third space anchor point associated with the third real-scene object.
Further, the navigation window and the detail window are displayed on the third visual angle picture, with the detail information of the first live-action object displayed in the detail window. Based on this, the display 30c is also configured to: in response to a trigger operation on the third space anchor point, highlight the third live-action object navigation tag corresponding to the third space anchor point in the navigation window, and display the detail information of the third live-action object in the detail window.
Further optionally, when performing coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional real-scene space based on the conversion relationship between the world coordinate system in which the target three-dimensional real-scene space is located and the screen coordinate system, to obtain the two-dimensional coordinates of each space anchor point in the screen coordinate system, the processor 30b is specifically configured to: multiply the three-dimensional space coordinates of each space anchor point in the target three-dimensional real-scene space by a view matrix to obtain the intermediate-state coordinates of each space anchor point in the observation coordinate system, where the view matrix is determined according to the position and orientation of the camera in the target three-dimensional real-scene space; multiply the intermediate-state coordinates of each space anchor point in the observation coordinate system by the projection matrix of the camera to obtain the three-dimensional coordinates of each space anchor point in the standard coordinate system; and convert, according to the conversion relationship between the standard coordinate system and the screen coordinate system, the x-axis and y-axis coordinates of each space anchor point in the standard coordinate system into the width and height coordinates of each space anchor point in the screen coordinate system.
Further optionally, when rendering each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in the screen coordinate system and the spatial anchor point rendering file corresponding to the target three-dimensional real-scene space, the processor 30b is specifically configured to: judge, for each spatial anchor point, whether the absolute values of the x-axis, y-axis, and z-axis coordinates of the spatial anchor point in the standard coordinate system are all smaller than the unit length of the standard coordinate system; judge whether the sum of the width coordinate of the spatial anchor point in the screen coordinate system and the width of its dom element is smaller than the screen width, and whether the sum of its height coordinate and the height of its dom element is smaller than the screen height; and if all of these judgments are affirmative, render the spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of the spatial anchor point in the screen coordinate system and the style information and fixed pixel value corresponding to the spatial anchor point defined in the spatial anchor point rendering file.
Further, as shown in fig. 3, the electronic device further includes: a communication component 30d, a power component 30e, an audio component 30f, and the like. Only some components are schematically shown in fig. 3, which does not mean that the electronic device includes only those components. The electronic device of this embodiment may be implemented as a desktop computer, a notebook computer, a smart phone, an IoT device, or the like.
For the principles and specific implementation of each step performed by the modules or units in the embodiments of the present application, reference may be made to the description of the same or corresponding steps above; details are not repeated here.
An exemplary embodiment of the present application further provides a computer-readable storage medium storing computer programs/instructions which, when executed by a processor, cause the processor to implement the steps of the above method; details are not repeated here.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process, such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A spatial anchor point processing method is characterized by comprising the following steps:
receiving a space anchor point generation request initiated aiming at a target three-dimensional live-action space, wherein the space anchor point generation request comprises three-dimensional space coordinates of each space anchor point in the target three-dimensional live-action space;
based on the conversion relation between the world coordinate system where the target three-dimensional live-action space is located and the screen coordinate system, carrying out coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional live-action space to obtain two-dimensional coordinates of each space anchor point in the screen coordinate system;
and rendering each space anchor point in the target three-dimensional real scene space based on the two-dimensional coordinates of each space anchor point in a screen coordinate system and a space anchor point rendering file corresponding to the target three-dimensional real scene space, wherein the space anchor point rendering file at least comprises style information corresponding to each space anchor point and a corresponding fixed pixel value.
2. The method of claim 1, wherein the spatial anchor point rendering file comprises: an HTML file and a CSS file, wherein the HTML file comprises at least the fixed pixel values corresponding to the spatial anchor points, and the CSS file comprises at least the style information corresponding to the spatial anchor points;
and rendering each space anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each space anchor point and the space anchor point rendering file corresponding to the target three-dimensional real-scene space comprises the following steps:
rendering each initial anchor point model in a target three-dimensional real-scene space by combining fixed pixel values corresponding to each space anchor point in the HTML file based on the two-dimensional coordinates of each space anchor point;
and performing style correction on each initial anchor point model by using style information corresponding to each spatial anchor point in the CSS file to obtain each spatial anchor point.
3. The method of claim 1, further comprising:
displaying a first visual angle picture on a graphical user interface provided by electronic equipment, wherein the first visual angle picture is a local picture in the target three-dimensional real scene space and comprises a first real scene object and a first space anchor point related to the first real scene object;
and responding to the trigger operation of the first spatial anchor point, and displaying an information popup on the graphical user interface, wherein the information popup contains detailed information of a first real scene object.
4. The method of claim 3, wherein displaying an information popup on the graphical user interface in response to a triggering operation of the first spatial anchor comprises:
responding to a touch operation on the first space anchor point, and acquiring two-dimensional coordinates of the first space anchor point in the first visual angle picture and screen position and size information of an information popup;
determining whether the first space anchor point is occluded by the information popup according to the two-dimensional coordinates of the first space anchor point in the first visual angle picture and the screen position and size information of the information popup;
if the first space anchor point is occluded, adjusting the relative display positions of the first space anchor point and the information popup, and displaying the information popup on the graphical user interface according to the adjusted relative display positions, so that the first space anchor point is no longer occluded by the information popup.
5. The method of claim 4, wherein adjusting the relative display positions of the first spatial anchor point and the information popup, and displaying the information popup on the graphical user interface according to the adjusted relative display positions comprises:
keeping the screen position of the information popup unchanged, rotating the display visual angle of the first visual angle picture to rotate the first space anchor point to the outside of the information popup, and displaying the information popup on the graphical user interface according to the screen position of the information popup;
or,
keeping the display visual angle of the first visual angle picture unchanged, adjusting the screen position of the information popup so that the information popup no longer occludes the first space anchor point, and displaying the information popup on the graphical user interface according to the adjusted screen position.
6. The method of claim 5, wherein keeping the screen position of the information popup unchanged and rotating the display visual angle of the first visual angle picture to rotate the first spatial anchor point out of the information popup comprises:
determining a target outer edge on the information popup window and a distance from the first space anchor point to the target outer edge according to the screen position and size information of the information popup window and the two-dimensional coordinate of the first space anchor point in a first visual angle picture, wherein the target outer edge refers to the outer edge of the information popup window closest to the first space anchor point;
determining a rotation direction and a rotation angle in the rotation direction according to the position of the target outer edge and the distance from the first space anchor point to the target outer edge;
and rotating the display visual angle of the first visual angle picture according to the rotating direction and the rotating angle so as to rotate the first space anchor point to the outside of the information popup window.
7. The method according to any one of claims 3 to 6, wherein the information popup comprises a navigation window and a detail window, the navigation window comprises a live-action object navigation tag corresponding to each spatial anchor point, and the detail window is used for displaying the detail information of the live-action object corresponding to the currently triggered spatial anchor point or live-action object navigation tag;
in response to a trigger operation on the first spatial anchor point, displaying an information popup on the graphical user interface, comprising:
and responding to the trigger operation on the first spatial anchor point, displaying the navigation window, highlighting a first live-action object navigation tag corresponding to the first spatial anchor point in the navigation window, displaying the detail window, and displaying the detail information of the first live-action object in the detail window.
8. The method of claim 7, further comprising:
responding to the triggering operation of a second live-action object navigation tag, displaying the detail information of a second live-action object corresponding to the second live-action object navigation tag in the detail window, and switching a first visual angle picture currently displayed on the graphical user interface to a second visual angle picture where the second live-action object is located;
the second visual angle picture at least comprises a second real scene object and a second space anchor point related to the second real scene object; the second live-action object navigation tag is other live-action object navigation tags different from the first live-action object navigation tag.
9. The method of claim 7, further comprising:
responding to the visual angle switching operation on the graphical user interface, and determining a new visual angle;
and displaying a third visual angle picture matched with the new visual angle on the graphical user interface, wherein the third visual angle picture is a local picture in the target three-dimensional real scene space and comprises a third real scene object and a third space anchor point related to the third real scene object.
10. The method according to claim 9, wherein the navigation window and the detail window are displayed on the third perspective screen, and the detail window displays therein detail information of the first real object;
the method further comprises the following steps: and responding to the triggering operation of the third spatial anchor point, highlighting a third real-scene object navigation label corresponding to the third spatial anchor point in the navigation window, and displaying the detail information of the third real-scene object in the detail window.
11. The method according to any one of claims 1 to 6, wherein the coordinate transformation of the three-dimensional space coordinate of each spatial anchor point in the target three-dimensional real scene space is performed based on a transformation relation between a world coordinate system in which the target three-dimensional real scene space is located and a screen coordinate system, so as to obtain the two-dimensional coordinate of each spatial anchor point in the screen coordinate system, and the method comprises the following steps:
multiplying the three-dimensional space coordinates of each space anchor point in the target three-dimensional real scene space by a view matrix to obtain intermediate state coordinates of each space anchor point in an observation coordinate system, wherein the view matrix is determined according to the position and the orientation of a camera in the target three-dimensional real scene space;
multiplying the intermediate state coordinates of each spatial anchor point in an observation coordinate system by the projection matrix of the camera to obtain the three-dimensional coordinates of each spatial anchor point in a standard coordinate system;
and according to the conversion relation between the standard coordinate system and the screen coordinate system, converting the x-axis coordinate and the y-axis coordinate of each space anchor point in the standard coordinate system into the width coordinate and the height coordinate of each space anchor point in the screen coordinate system respectively.
12. The method of claim 11, wherein rendering each spatial anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinates of each spatial anchor point in the screen coordinate system and a spatial anchor point rendering file corresponding to the target three-dimensional real-scene space comprises:
judging whether absolute values of x-axis coordinates, y-axis coordinates and z-axis coordinates of the spatial anchor points in a standard coordinate system are all smaller than the unit length of the standard coordinate system or not aiming at each spatial anchor point; and
judging whether the width coordinate and the height coordinate of the space anchor point in a screen coordinate system are respectively smaller than the width and the height of a screen;
if the judgment results of the judgment operations are yes, rendering the space anchor point in the target three-dimensional real-scene space based on the two-dimensional coordinate of the space anchor point in the screen coordinate system, and the style information and the fixed pixel value corresponding to the space anchor point defined in the space anchor point rendering file.
13. A spatial anchor point processing apparatus, comprising:
the system comprises a receiving module, a generating module and a processing module, wherein the receiving module is used for receiving a space anchor point generating request initiated aiming at a target three-dimensional real scene space, and the space anchor point generating request comprises three-dimensional space coordinates of each space anchor point in the target three-dimensional real scene space;
the coordinate transformation module is used for carrying out coordinate transformation on the three-dimensional space coordinates of each space anchor point in the target three-dimensional real scene space based on the transformation relation between the world coordinate system where the target three-dimensional real scene space is located and the screen coordinate system to obtain the two-dimensional coordinates of each space anchor point in the screen coordinate system;
and the anchor point rendering module is used for rendering each space anchor point in the target three-dimensional real scene space based on the two-dimensional coordinates of each space anchor point in the screen coordinate system and a space anchor point rendering file corresponding to the target three-dimensional real scene space, wherein the space anchor point rendering file at least comprises style information corresponding to each space anchor point and a corresponding fixed pixel value.
14. An electronic device, comprising: a memory and a processor; the memory is adapted to store a computer program, and the processor is coupled to the memory for executing the computer program for implementing the steps of the method of any of claims 1-12.
15. A computer-readable storage medium storing a computer program/instructions, which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1-12.
CN202210744786.2A 2022-06-27 2022-06-27 Spatial anchor point processing method and device, electronic equipment and storage medium Active CN114935977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210744786.2A CN114935977B (en) 2022-06-27 2022-06-27 Spatial anchor point processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210744786.2A CN114935977B (en) 2022-06-27 2022-06-27 Spatial anchor point processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114935977A true CN114935977A (en) 2022-08-23
CN114935977B CN114935977B (en) 2023-04-07

Family

ID=82868689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210744786.2A Active CN114935977B (en) 2022-06-27 2022-06-27 Spatial anchor point processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114935977B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120011430A1 (en) * 2010-07-09 2012-01-12 Stefan Parker Infinite Scrolling
CN106663338A (en) * 2014-08-01 2017-05-10 索尼公司 Information processing device, information processing method, and program
CN104915373A (en) * 2015-04-27 2015-09-16 北京大学深圳研究生院 Three-dimensional webpage design method and device
CN107533360A (en) * 2015-12-07 2018-01-02 华为技术有限公司 A kind of method for showing, handling and relevant apparatus
CN107797801A (en) * 2017-10-20 2018-03-13 江苏电力信息技术有限公司 A kind of adaptation method based on a variety of interface of mobile terminal
WO2021021624A1 (en) * 2019-07-26 2021-02-04 Patnotate Llc Technologies for content analysis
CN113318428A (en) * 2021-05-25 2021-08-31 网易(杭州)网络有限公司 Game display control method, non-volatile storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WOLIUQIANGDONG: "WebGL 3D Intelligent Building Monitoring System Based on HTML5", https://blog.csdn.net/woliuqiangdong/article/details/122809084

Also Published As

Publication number Publication date
CN114935977B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant