CN114445525A - Virtual object display method and device and electronic equipment - Google Patents

Virtual object display method and device and electronic equipment

Info

Publication number
CN114445525A
Authority
CN
China
Prior art keywords
virtual
virtual image
stereoscopic
point
virtual object
Prior art date
Legal status
Pending
Application number
CN202210118043.4A
Other languages
Chinese (zh)
Inventor
赵弯弯
张冠南
陈祖阁
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202210118043.4A
Publication of CN114445525A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G06Q 30/0601 - Electronic shopping [e-shopping]
    • G06Q 30/0641 - Shopping interfaces
    • G06Q 30/0643 - Graphical representation of items or shoppers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/08 - Volume rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

The present application discloses a method and apparatus for displaying a virtual object, and an electronic device. The method comprises: acquiring image data of a virtual object to be displayed, wherein the image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each marker point marked on the stereoscopic virtual image, at least one marker point being marked on the stereoscopic virtual image; displaying the stereoscopic virtual image of the virtual object; determining at least one first target marker point that can be presented within the visible range of the stereoscopic virtual image, the first target marker point belonging to the at least one marker point; and displaying the annotation information associated with the first target marker point. With this scheme, the user does not need to switch separately to an information introduction page to look up the information introduction related to the object.

Description

Virtual object display method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for displaying a virtual object, and an electronic device.
Background
On some e-commerce or article display platforms, a stereoscopic virtual image display of an object is often provided so that users can understand objects such as goods or articles more fully and realistically.
However, a stereoscopic virtual image of an object can only show its appearance. If a user wants to read the detailed introduction information of a certain part of the object, the user still has to switch from the displayed stereoscopic virtual image to the object's information introduction area, so the user cannot learn the detailed information of the object intuitively and efficiently.
Disclosure of Invention
The application provides a display method and device of a virtual object and electronic equipment.
The display method of the virtual object comprises:
acquiring image data of a virtual object to be displayed, wherein the image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each marker point marked on the stereoscopic virtual image, at least one marker point being marked on the stereoscopic virtual image;
displaying the stereoscopic virtual image of the virtual object;
determining at least one first target marker point that can be presented within the visible range of the stereoscopic virtual image, the first target marker point belonging to the at least one marker point;
and displaying the annotation information associated with the first target marker point.
In one possible implementation, displaying the annotation information associated with the first target marker point comprises:
displaying the annotation information associated with the first target marker point in a first display state;
and when the annotation information associated with the first target marker point is displayed, the method further comprises:
displaying, in a second display state, the marker points other than the first target marker point in the stereoscopic virtual image and their associated annotation information, wherein the display effect of the second display state differs from that of the first display state.
In yet another possible implementation, displaying the stereoscopic virtual image of the virtual object comprises:
displaying the stereoscopic virtual image of the virtual object based on a first camera coordinate system corresponding to a set initial spatial position of a virtual camera.
In yet another possible implementation, determining the at least one first target marker point that can be presented within the visible range of the stereoscopic virtual image comprises:
determining, with the initial spatial position of the virtual camera as the viewpoint position, at least one first target marker point that can be presented within the visible range on the stereoscopic virtual image.
In yet another possible implementation, after displaying the stereoscopic virtual image of the virtual object, the method further comprises:
obtaining an adjustment operation for adjusting the display effect of the virtual object;
adjusting the displayed stereoscopic virtual image of the virtual object in response to the adjustment operation;
determining at least one second target marker point that can be presented within the visible range of the adjusted stereoscopic virtual image of the virtual object, the second target marker point belonging to the at least one marker point;
and displaying the annotation information associated with the second target marker point.
In yet another possible implementation, adjusting the displayed stereoscopic virtual image of the virtual object in response to the adjustment operation comprises:
in response to the adjustment operation, adjusting the position of the set virtual camera in a virtual scene and determining the adjusted spatial position of the virtual camera in the virtual scene, wherein the virtual scene is the virtual space to which the stereoscopic virtual image belongs;
and mapping the stereoscopic virtual image of the virtual object to a display area according to a second camera coordinate system constructed from the adjusted spatial position of the virtual camera, so as to adjust the displayed stereoscopic virtual image of the virtual object.
In yet another possible implementation, determining the at least one second target marker point that can be presented within the visible range of the adjusted stereoscopic virtual image of the virtual object comprises:
determining, with the adjusted spatial position of the virtual camera as the viewpoint position, at least one second target marker point within the visible range on the stereoscopic virtual image.
In yet another possible implementation, displaying the stereoscopic virtual image of the virtual object based on the first camera coordinate system corresponding to the set initial spatial position of the virtual camera comprises:
determining, according to the first camera coordinate system constructed from the initial spatial position of the virtual camera, the screen coordinates to which each pixel point in the stereoscopic virtual image of the virtual object is mapped in the screen coordinate system of the display area;
and mapping the stereoscopic virtual image of the virtual object to the display area using the screen coordinates of each pixel point in the stereoscopic virtual image.
The display apparatus for a virtual object comprises:
a data acquisition unit, configured to acquire image data of a virtual object to be displayed, wherein the image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each marker point marked on the stereoscopic virtual image, at least one marker point being marked on the stereoscopic virtual image;
an image display unit, configured to display the stereoscopic virtual image of the virtual object;
a target determining unit, configured to determine at least one first target marker point that can be presented within the visible range of the stereoscopic virtual image, the first target marker point belonging to the at least one marker point;
and an annotation display unit, configured to display the annotation information associated with the first target marker point.
The electronic device comprises at least a memory and a processor;
the processor is configured to execute any one of the virtual object display methods of the present application;
and the memory is configured to store the program needed by the processor to perform the above operations.
With the above scheme, at least one marker point is marked in advance in the stereoscopic virtual image of the virtual object, and each marker point is associated with annotation information. On this basis, while the stereoscopic virtual image of the virtual object is displayed, the annotation information associated with each target marker point that can be presented within its visible range is also displayed. A user viewing the output stereoscopic virtual image can therefore intuitively see the annotation information associated with the visible marker points, and through it learn the information introduction of the object part that each marker point represents, without switching separately to an information introduction page to look up the related information introduction.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic flowchart of a method for displaying a virtual object according to an embodiment of the present disclosure;
fig. 2 is a schematic view of a stereoscopic virtual image of a virtual object displayed in the embodiment of the present application;
fig. 3 is another schematic view of a stereoscopic virtual image of a virtual object displayed in the embodiment of the present application;
fig. 4 is a schematic flowchart of a method for displaying a virtual object according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a principle of determining a target mark point by using a spatial position of a virtual camera as a viewpoint position in the embodiment of the present application;
fig. 6 is a schematic flowchart of a method for displaying a virtual object according to an embodiment of the present application;
fig. 7 is a schematic flowchart of creating a mark point and annotation information for a stereoscopic virtual image of a virtual object according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a display device of a virtual object according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a composition architecture of an electronic device according to an embodiment of the present disclosure.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The scheme of the present application can be applied to an e-commerce platform or an article display platform to display a stereoscopic virtual image of an object in a virtual scene, so that a user can intuitively and conveniently understand the real object from the displayed stereoscopic virtual image.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
As shown in fig. 1, which shows a flowchart of a method for displaying a virtual object provided in an embodiment of the present application, the method of the present embodiment may be applied to an electronic device, where the electronic device may be a server, such as a server of an e-commerce platform or an article display platform, and the like. The electronic device may also be a terminal device, such as a mobile phone, a notebook computer, and the like, and the terminal device may establish a communication connection with a server providing image data of the virtual object, for example, the terminal device may establish a communication connection with an e-commerce platform or an article display platform.
The scheme of the embodiment can comprise the following steps:
s101, image data of a virtual object to be displayed is obtained.
The image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each marker point marked on the stereoscopic virtual image.
It is understood that the dimensions of the stereoscopic virtual image may vary depending on the virtual scene in which the virtual object is located. For example, for a three-dimensional virtual scene, the stereoscopic virtual image of the virtual object may be a three-dimensional virtual image.
In one possible scenario, the virtual object corresponds to a real physical object. Accordingly, the stereoscopic virtual image may be a stereoscopic model image constructed in the virtual scene to characterize that real physical object. For example, the stereoscopic virtual image of the virtual object may be digital twin information of the physical object, that is, a stereoscopic virtual image created for the physical object using digital twin technology. The composition, structure, and shape of the stereoscopic virtual image are consistent with those of the real physical object, and its size can be in a set proportion to the size of the real physical object.
Of course, the stereoscopic virtual image of the virtual object need not be a mapping of a real physical object into the virtual scene; it may instead be the stereoscopic virtual image of an artificially constructed object model. For example, if a robot is to be developed in the future, a stereoscopic virtual image of the robot can be constructed before the robot itself is built, so that information about the robot to be produced can be shown or explained to people through the stereoscopic virtual image.
At least one marker point is marked on the stereoscopic virtual image.
The marker points on the stereoscopic virtual image can be selected and marked on the virtual object while the virtual object, or the virtual scene containing it, is created. The position of each marker point in the stereoscopic virtual image can be set as required; generally, a marker point is placed on a partial area of the virtual object for which information needs to be introduced, so that the introduction information of the corresponding part can be associated with the marker point.
It can be seen that a marker point in the stereoscopic virtual image may represent a part or a component of the virtual object.
For example, take the virtual object to be a virtual computer. The virtual computer is composed of a screen, a screen panel, a keyboard, a backplate carrying the keyboard, and so on. When the virtual computer is displayed, introduction information for one or more of these parts may be presented, so marker points can be marked on the image areas of the screen, the screen panel, the keyboard, the backplate, etc. in the virtual computer's stereoscopic virtual image.
The annotation information associated with a marker point on the stereoscopic virtual image also needs to be preset and associated with that marker point. The specific processes of marking the marker points on the stereoscopic virtual image and creating the annotation information associated with them can be set as required and are not described again here.
The annotation information associated with a marker point may be description information of the target part of the virtual object characterized by that marker point. For example, the description information may cover the composition, structure, function, manufacturer, and the like of the target part of the virtual object corresponding to the marker point, and is not specifically limited.
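The image data described above, a stereoscopic virtual image carrying marker points that each anchor a 3D position and a piece of annotation information, can be sketched as a simple data structure. The patent does not prescribe any concrete representation; the class names, fields, and sample values below are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MarkerPoint:
    name: str          # part of the virtual object, e.g. "screen" (hypothetical)
    position: tuple    # (x, y, z) anchor on the stereoscopic model
    annotation: str    # description shown next to the marker

@dataclass
class VirtualObjectImage:
    model_id: str
    markers: list = field(default_factory=list)

    def marker_names(self):
        return [m.name for m in self.markers]

# Build the virtual notebook computer example from the description.
laptop = VirtualObjectImage("virtual-notebook")
laptop.markers.append(MarkerPoint("screen", (0.0, 0.2, 0.0), "14-inch display, 2560x1600"))
laptop.markers.append(MarkerPoint("keyboard", (0.0, 0.0, 0.1), "backlit keyboard"))
print(laptop.marker_names())  # ['screen', 'keyboard']
```

A real platform would load such records from stored image data rather than construct them inline.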
It is understood that, when the present application is applied to a server, the server obtains, from stored data, image data of a stereoscopic virtual image of a virtual object to be currently output, for subsequent output to a display screen or a specific display area of a terminal device.
In the case where the present application is applied to a terminal device, the terminal device may obtain image data to be output from a server in real time, or obtain image data that has been stored in the terminal device in advance, which is not limited thereto.
And S102, displaying the stereoscopic virtual image of the virtual object.
It can be understood that displaying the stereoscopic virtual image of the virtual object is actually a process of mapping the stereoscopic virtual image onto a two-dimensional display area, that is, of converting each pixel point in the stereoscopic virtual image from the world coordinate system to the screen coordinate system; the specific process is not limited.
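The world-to-screen conversion mentioned here can be illustrated with a minimal pinhole-projection sketch. The setup is a simplifying assumption, not the patent's method: the camera is axis-aligned and looks down the negative z axis, and the focal length and screen size are arbitrary illustrative values. A real renderer would apply full view and projection matrices.

```python
def world_to_screen(point, cam_pos, focal=500.0, screen_w=800, screen_h=600):
    """Map a world-space point to screen coordinates (origin top-left)."""
    # Translate into camera coordinates (camera assumed axis-aligned).
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        return None  # behind the camera: cannot be mapped to the screen
    # Perspective divide, then shift into the screen coordinate system.
    sx = screen_w / 2 + focal * x / -z
    sy = screen_h / 2 - focal * y / -z
    return (sx, sy)

# A point one unit straight ahead of the camera lands at the screen centre.
print(world_to_screen((0.0, 0.0, -1.0), (0.0, 0.0, 0.0)))  # (400.0, 300.0)
```

The same mapping, applied per pixel point, is what carries the stereoscopic virtual image into the display area.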
In the present application, the virtual object may be displayed on a display screen of the electronic device, or projected by the electronic device onto a display area outside the electronic device or of another display device; this is not limited.
For example, taking an electronic device as a terminal device as an example, the terminal device may output the stereoscopic virtual image to its display screen, or may output the stereoscopic virtual image to another display screen or project the stereoscopic virtual image onto a designated display area (e.g., a display area of a wall or a curtain).
For another example, if the electronic device is a server, the server may output the stereoscopic virtual image to the terminal device to present the stereoscopic virtual image on a display screen of the terminal device.
S103, determining at least one first target mark point which can be presented in a visual range in the stereoscopic virtual image.
A first target marker point that can be presented within the visible range of the stereoscopic virtual image is a marker point that, once the stereoscopic virtual image is displayed, appears at the front of the display area and can be seen by the user. To distinguish it from the marker points determined after the stereoscopic virtual image of the virtual object is subsequently adjusted, the marker point determined in step S103 is referred to as the first target marker point.
The first target mark point belongs to the at least one mark point.
It is understood that the operation of determining the target marker points may be performed after step S102, in synchronization with it, or before it. Performing this step before or in synchronization with step S102 allows the subsequent step S104 to be carried out at the same time as the stereoscopic virtual image is displayed.
For example, in one possible implementation, the present application may determine, in combination with the current viewpoint position in the virtual scene, the at least one first target marker point that can be presented within the visible range on the stereoscopic virtual image.
The viewpoint position in the virtual scene is a virtual viewpoint in that scene; it serves as the reference point for mapping the stereoscopic virtual image of the virtual object onto the two-dimensional screen plane.
As an option, a virtual camera may be set in the virtual scene. The initial position of the virtual camera is fixed, but its camera position changes continuously as the user performs adjustment operations such as rotating or zooming the virtual scene. On this basis, the current spatial position of the virtual camera in the virtual scene can be used as the viewpoint position to determine the at least one first target marker point that can be presented within the visible range of the stereoscopic virtual image.
It should be noted that the virtual camera is not visible in the virtual scene, and the virtual camera is only a virtual viewpoint reference object for determining the viewpoint position.
It can be understood that, after the image data of the virtual object is obtained, when the stereoscopic virtual image of the virtual object is output for the first time, the camera position of the virtual camera in the virtual scene is the set initial spatial position, and the current spatial position of the virtual camera is the initial spatial position of the virtual camera.
In the above implementation, the first target marker points can be determined while, or even before, the stereoscopic virtual image of the virtual object is displayed, which makes it possible to display the annotation information of the first target marker points synchronously with the stereoscopic virtual image.
Of course, in practical applications, after the stereoscopic virtual image is displayed, the at least one first target marker point within the visible range may also be determined from the mapping of the stereoscopic virtual image in the display area.
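One way to realize the viewpoint-based determination described above is a facing test: a marker point counts as presentable when the surface it sits on faces the camera position. This is a simplified sketch under stated assumptions; the patent does not specify the test, and occlusion by other geometry would additionally require a ray or depth test, omitted here.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def visible_markers(markers, cam_pos):
    """markers: list of (name, position, outward surface normal) tuples.

    A marker is treated as within the visible range when the surface
    normal at the marker points towards the viewpoint (the camera).
    """
    result = []
    for name, pos, normal in markers:
        to_cam = tuple(c - p for c, p in zip(cam_pos, pos))
        if dot(normal, to_cam) > 0:  # surface faces the viewpoint
            result.append(name)
    return result

# Hypothetical laptop markers: the screen faces +z, the backplate faces -z.
markers = [
    ("screen", (0, 0, 0), (0, 0, 1)),
    ("backplate", (0, 0, -0.1), (0, 0, -1)),
]
print(visible_markers(markers, (0, 0, 5)))  # ['screen']
```

Moving the viewpoint behind the object would flip the result, which mirrors how the set of target marker points changes when the virtual camera is adjusted.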
And S104, displaying the labeling information associated with the first target mark point.
Step S104 may be executed after step S102, or simultaneously with it, so that while viewing the displayed stereoscopic virtual image the user can synchronously view the annotation information associated with each first target marker point.
It can be understood that the present application can display the stereoscopic virtual image of the virtual object and, at the same time, the annotation information of each first target marker point within its visible range. While seeing the stereoscopic virtual image, the user intuitively sees the annotation information corresponding to each marker point presented in the display area, and can thus directly learn the relevant information about the part of the virtual object that the marker point represents.
The annotation information may be displayed in various ways. For example, in one possible implementation, the display coordinates of each first target marker point in the display area, also called its screen coordinates, may be determined. On this basis, the annotation information associated with the first target marker point can be displayed in a set area corresponding to its display coordinates.
For example, the set area corresponding to the display coordinates of a first target marker point may be a set area adjacent to and to the right of those display coordinates.
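The "set area corresponding to the display coordinates" can be sketched as a fixed offset to the lower right of the marker's screen coordinates, clamped so the label stays inside the display area. The offsets, label size, and screen size below are illustrative assumptions, not values from the patent.

```python
def annotation_anchor(marker_xy, screen_w=800, screen_h=600,
                      dx=12, dy=12, label_w=180, label_h=40):
    """Top-left corner of the annotation box for a marker's screen coords.

    Places the label to the lower right of the marker, clamped so the
    whole label box remains inside the display area.
    """
    x = min(marker_xy[0] + dx, screen_w - label_w)
    y = min(marker_xy[1] + dy, screen_h - label_h)
    return (x, y)

print(annotation_anchor((400, 300)))  # (412, 312): plain lower-right offset
print(annotation_anchor((790, 590)))  # (620, 560): clamped to stay on screen
```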
Fig. 2 is a schematic diagram illustrating first target marker points, and the annotation information associated with them, in a displayed stereoscopic virtual image of a virtual object.
Fig. 2 takes the virtual object to be a virtual notebook computer. As can be seen from fig. 2, the displayed stereoscopic virtual image of the virtual notebook computer shows the display screen and the body plane where the keyboard is located. In this case, the marker point 201 marked for the display screen and the marker point 202 marked for the body where the keyboard is located are both in a visible state. Accordingly, the first annotation information 203 associated with marker point 201 and the second annotation information 204 associated with marker point 202 are displayed.
The first annotation information 203 is displayed at the lower right of marker point 201 and may be an introduction to the resolution and size of the notebook computer's display screen.
The second annotation information 204, an introduction to the performance and type of the body, is displayed at the lower right of marker point 202.
As can be seen from fig. 2, through the stereoscopic virtual image of the virtual notebook computer the user can not only intuitively see the appearance of the corresponding real notebook computer from various angles, but also synchronously learn the detailed information of its relevant components.
With the scheme of this embodiment, at least one marker point is marked in advance in the stereoscopic virtual image of the virtual object, and each marker point is associated with annotation information. While the stereoscopic virtual image of the virtual object is displayed, the annotation information associated with each target marker point within its visible range can also be displayed, so that the user intuitively sees the annotation information of the visible marker points while viewing the output stereoscopic virtual image, learns the related information introduction through it, and does not need to switch separately to an information introduction page to look up the related information introduction.
In this embodiment of the application, while the annotation information associated with each first target marker point within the visible range is displayed, the other marker points in the stereoscopic virtual image that are not first target marker points, and their associated annotation information, can either be hidden or be presented with a display effect different from that of the annotation information associated with the first target marker points.
In one possible implementation, the annotation information associated with the first target marker points may be displayed in a first display state, while the marker points other than the first target marker points, and their associated annotation information, are displayed in a second display state whose display effect differs from that of the first display state.
In one example, the display definition of the first display state is higher than that of the second display state. For example, the first display state may display information normally, so that the user can clearly see the first target marker points and their associated annotation information; in the second display state, the other marker points and their associated annotation information may be invisible or in a blurred display state.
See, for example, fig. 3 for a schematic illustration of the display of virtual objects in a virtual scene.
As can be seen from fig. 3, in the stereoscopic virtual image of the virtual notebook computer shown in fig. 3, the body where the keyboard of the virtual notebook computer is located is visible, and the mark point 301 marked on the body is also within the visible range. In this case, the marker point 301 on the body and its associated marking information 302 can be normally displayed.
Since the screen of the virtual notebook computer is invisible, the marker points marked on the screen are not within the visible range. Accordingly, the marker point 303 on the screen and its associated annotation information 304 in fig. 3 are in a blurred display state. Comparing the marker point 301 and its annotation information 302 with the marker point 303 and its associated annotation information 304 on the screen, it can be seen that the latter are displayed with lower definition.
In yet another example, the first display state and the second display state may use different display colors. For example, the first display state presents the first target marker point and its associated annotation information in a bright, eye-catching color, while the second display state presents the other marker points and their associated annotation information in a darker, less eye-catching color.
Of course, the above are only two possible implementation manners for making the first display state and the second display state differ in display effect; other implementations are possible in practical applications, which is not limited in the present application.
In order to facilitate understanding of the solution of the present application, a possible implementation manner of displaying a stereoscopic virtual image of a virtual object and determining a first target mark point is taken as an example to describe the solution of the present application.
As shown in fig. 4, which shows another flowchart of the method for displaying a virtual object provided in the embodiment of the present application, the method of the present embodiment may include:
s401, obtaining image data of a virtual object to be displayed.
The image data comprises a three-dimensional virtual image of the virtual object and labeling information related to each labeling point marked on the three-dimensional virtual image. At least one marking point is marked on the three-dimensional virtual image.
This step can be referred to the related description of the previous embodiment, and is not described herein again.
S402, displaying a stereoscopic virtual image of the virtual object based on the first camera coordinate system corresponding to the set initial spatial position of the virtual camera.
In this embodiment, a virtual camera is set in a virtual scene to which a virtual object belongs, and the virtual camera has a set initial spatial position.
In this case, the present application may construct a camera coordinate system in combination with the initial spatial position of the virtual camera, and for the sake of convenience of distinction from the subsequently constructed camera coordinate system, the camera coordinate system constructed based on the initial spatial position of the virtual camera is referred to herein as a first camera coordinate system. Accordingly, the stereoscopic virtual image of the virtual object can be displayed based on the first camera coordinate system, so that the stereoscopic virtual image can be mapped into a two-dimensional display area.
In a possible implementation manner, on the basis of constructing the first camera coordinate system by using the initial spatial position of the virtual camera, the screen coordinates of each pixel point in the stereoscopic virtual image of the virtual object mapped to the screen coordinate system of the display area may be determined according to the first camera coordinate system. Correspondingly, the three-dimensional virtual image of the virtual object is mapped to the display area by combining the screen coordinates of each pixel point in the three-dimensional virtual image of the virtual object.
Determining the screen coordinates of each pixel point of the stereoscopic virtual image of the virtual object is actually a process of mapping the world coordinates of each pixel point in the world coordinate system to screen coordinates in the screen coordinate system.
The mapping process from the world coordinate system to the screen coordinate system involves: mapping each pixel point of the stereoscopic virtual image from the world coordinate system to the first camera coordinate system, then mapping from the first camera coordinate system to standardized device coordinates, and finally converting the standardized device coordinates into screen coordinates in the screen coordinate system.
For example, the specific mapping process may include the following transformations:
First, a first transformation matrix from the world coordinate system to the first camera coordinate system is determined by combining the first camera coordinate system with the world coordinate system of the virtual scene where the virtual object is located, and the world coordinates of each pixel point of the stereoscopic virtual image in the world coordinate system are transformed into coordinates in the first camera coordinate system according to the first transformation matrix.
Second, a second transformation matrix from the first camera coordinate system to the standardized device coordinate system is determined in combination with the first camera coordinate system, and the coordinates of each pixel point of the stereoscopic virtual image in the first camera coordinate system are converted into standardized device coordinates based on the second transformation matrix.
Finally, the standardized device coordinates of each pixel point of the stereoscopic virtual image are converted into screen coordinates according to the mapping relationship from standardized device coordinates to screen coordinates.
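As a hedged sketch of the two matrix steps above, a single helper can apply a 4x4 transformation matrix (the view matrix for the first step, the projection matrix for the second) to a point in homogeneous coordinates; the matrices themselves depend on the virtual camera and are not specified by the patent, and the function name is illustrative.

```python
def apply_transform(matrix, point):
    """Apply a 4x4 row-major transformation matrix to a 3D point using
    homogeneous coordinates, with a perspective divide at the end.

    Chaining this with the first transformation matrix (world -> camera)
    and then the second (camera -> standardized device coordinates)
    reproduces the two matrix steps described above.
    """
    x, y, z = point
    v = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = out[3] if out[3] != 0 else 1.0  # perspective divide
    return (out[0] / w, out[1] / w, out[2] / w)
```

For example, applying an identity matrix leaves the point unchanged, while a translation matrix shifts it along the corresponding axis.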
For example, in the case where the normalized device coordinates (x1, y1, z1) of a pixel point in a stereoscopic virtual image of a virtual object are known, the screen coordinates (x2, y2) can be converted by the following formula:
x2=(0.5+x1/2)*w;
y2=(0.5-y1/2)*h;
Here, w is the screen width, that is, the width of the display area used to display the stereoscopic virtual image: if the stereoscopic virtual image of the virtual object is displayed on a display screen, the screen width may be the display width of that screen; if the stereoscopic virtual image is projected onto another type of display area, for example by projection, the screen width is the width of the display area showing the projected image. Similarly, h is the screen height, which may be the display height of the display screen or the height of the display area.
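The two formulas above can be written directly as code. This sketch assumes, as in the text, that x1 and y1 are normalized device coordinates in [-1, 1]; the function name is illustrative.

```python
def ndc_to_screen(x1, y1, w, h):
    """Map normalized device coordinates (x1, y1) to screen coordinates,
    using the formulas x2 = (0.5 + x1/2) * w and y2 = (0.5 - y1/2) * h.
    The y axis is flipped because screen coordinates grow downward.
    """
    x2 = (0.5 + x1 / 2) * w
    y2 = (0.5 - y1 / 2) * h
    return x2, y2
```

For a 1920x1080 display area, the NDC origin (0, 0) maps to the screen center (960, 540), and the NDC point (-1, 1) maps to the top-left corner (0, 0).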
Of course, in practical applications, in combination with the first camera coordinate system corresponding to the virtual camera, there may be a plurality of implementation possibilities for the specific implementation of mapping the stereoscopic virtual image of the virtual object to the screen coordinate system, which is not limited in the present application.
S403, determining at least one first target mark point which can be presented in a visual range on the stereoscopic virtual image by taking the initial space position of the virtual camera as a visual point position.
The initial spatial position of the virtual camera is taken as the viewpoint position, that is, the initial spatial position of the virtual camera is taken as the user viewpoint. Correspondingly, the at least one first target mark point is a first target mark point which is within a visual range in the stereoscopic virtual image under the condition that the virtual object is observed from the viewpoint position in the virtual scene.
In a possible implementation manner, for each marker point in the stereoscopic virtual image of the virtual object, a ray may be constructed from the initial spatial position of the virtual camera to that marker point. If the first intersection point of the ray with the stereoscopic virtual image is the marker point itself, the marker point can be presented within the visible range; if the first intersection point is not the marker point, the marker point will not be within the visible range after the stereoscopic virtual image is displayed.
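A minimal sketch of this occlusion test follows. It assumes a ray-intersection query has already returned the distances along the ray at which the ray hits the stereoscopic virtual image (e.g. from a ray-mesh intersection routine, which is not shown); the function name and tolerance are illustrative assumptions, not part of the patent.

```python
import math

def is_marker_visible(camera_pos, marker_pos, intersect_distances, eps=1e-6):
    """Return True if the marker is the first point hit by a ray cast from
    the camera toward the marker, i.e. no part of the model lies strictly
    between the camera and the marker.

    intersect_distances: distances along the ray at which it intersects
    the stereoscopic virtual image of the virtual object.
    """
    marker_dist = math.dist(camera_pos, marker_pos)
    nearest_hit = min(intersect_distances, default=math.inf)
    # The marker is visible when the nearest surface hit is the marker itself.
    return nearest_hit >= marker_dist - eps
```

With a camera at the origin and a marker 5 units away, hit distances [5.0, 8.0] mean the ray reaches the marker first (visible), whereas [3.0, 5.0] mean some surface occludes it.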
For example, see fig. 5, which shows a schematic diagram of determining target mark points within a visual range with the spatial position of the virtual camera as the viewpoint position.
Fig. 5 takes a virtual notebook computer as the virtual object. In the stereoscopic virtual image of the notebook computer, a marker point 501 is marked on the screen of the notebook computer.
A virtual camera 502 is constructed in the virtual scene where the virtual object is located. With the position of the virtual camera 502 known, a ray is cast from the position of the virtual camera 502 toward the marker point 501; the ray passes through the marker point 501 and then penetrates into the screen of the notebook computer. The marker point 501 is therefore the first intersection point of the ray with the notebook computer, and is thus a marker point that can be within the visible range.
Meanwhile, a screen coordinate system and a world coordinate system are marked in fig. 5. The two coordinate axes X and Y drawn from the origin (0, 0) constitute the screen coordinate system, and the world coordinate system, with x, y, and z coordinate axes, has its origin at the center point of the virtual scene where the virtual notebook computer is located. Combining the mapping relationship from the world coordinate system to the screen coordinate system with fig. 5, it can be seen that, in the current state, after the virtual notebook computer is mapped to the screen coordinate system, the marker point 501 is at the front of the display screen, so that the user can see the marker point 501.
S404, displaying the labeling information associated with the first target mark point.
This step can be referred to the related description above and will not be described herein.
It is understood that the order of steps S403 and S404 and step S402 in this embodiment is not limited to that shown in fig. 4, and steps S403 and S404 may be performed simultaneously with step S402 in practical applications.
It can be understood that, after the stereoscopic virtual image of the virtual object is displayed, the user may adjust it as needed, and accordingly the present application may obtain an adjustment operation for adjusting the display effect of the virtual object. The adjustment operation may be rotating the stereoscopic virtual image of the virtual object to change which portion of the virtual object is displayed in the display area, or scaling the stereoscopic virtual image of the virtual object; other possibilities also exist, and the present application is not limited in this respect.
On this basis, the displayed stereoscopic virtual image of the virtual object is adjusted according to the adjustment operation. For example, when a rotation of the stereoscopic virtual image of the virtual object is detected, the angle of the stereoscopic virtual image is adjusted accordingly to change the presentation angle of the stereoscopic virtual image of the virtual object in the display area, so that the user sees partial images different from those seen before the adjustment.
Correspondingly, the present application may re-determine at least one second target marker point that can be presented within the visible range in the adjusted stereoscopic virtual image of the virtual object, and display the annotation information associated with the second target marker point. The second target marker point belongs to the at least one marker point marked on the stereoscopic virtual image.
After the adjustment operation is detected, the process of displaying the stereoscopic virtual image of the virtual object based on the adjustment operation is similar to the previous process. The difference is that the adjustment operation changes the virtual camera in the virtual scene where the virtual object is located, so the camera coordinate system also changes correspondingly, and the screen coordinates of the stereoscopic virtual image in the display area need to be determined in combination with the changed camera coordinate system.
Correspondingly, the process of determining the second target mark point may also be the same as the implementation manner of determining the first target mark point, and is not described herein again.
To facilitate understanding of the scheme of the present application, the following description takes as an example the case where the electronic device is a terminal device and the stereoscopic virtual image of the virtual object, situated in its virtual scene, is output to a display screen of the electronic device.
As shown in fig. 6, which shows another flowchart of the display method of the virtual object of the present application, the method of the present embodiment is applied to an electronic device, which may be a terminal device.
The method of the embodiment may include:
s601, obtaining image data of a virtual object to be displayed.
The image data comprises a three-dimensional virtual image of the virtual object and labeling information related to each labeling point marked on the three-dimensional virtual image. At least one marking point is marked on the three-dimensional virtual image.
And S602, outputting a three-dimensional virtual image of the virtual object to a display screen based on the first camera coordinate system corresponding to the set initial space position of the virtual camera.
S603, determining at least one first target mark point which can be presented in a visual range on the three-dimensional virtual image of the virtual object by taking the initial space position of the virtual camera as a viewpoint position.
S604, displaying the labeling information associated with the first target mark point on a display screen of the electronic equipment.
The above steps S601 to S604 can refer to the related description of the previous embodiment, and are not described herein again.
In the present embodiment, outputting the stereoscopic virtual image of the virtual object and the annotation information associated with the marker points to the display screen of the electronic device is taken as an example; the present embodiment is also applicable to the case where the electronic device projects the stereoscopic virtual image of the virtual object onto a projection area outside the electronic device.
S605, an adjustment operation for adjusting the display effect of the virtual object is obtained.
The adjustment operation may be a rotation operation for rotating the displayed virtual object, or a zoom operation for zooming the virtual object.
For example, a virtual object displayed in the display screen may be rotated by a mouse or a touch, etc., to trigger the rendering of images of other angles in the three-dimensional virtual image of the virtual object in the display screen.
And S606, responding to the adjusting operation, adjusting the position of the set virtual camera in the virtual scene, and determining the adjusted space position of the virtual camera in the virtual scene.
The virtual scene is a virtual space scene where the virtual object is located, that is, a virtual space where a three-dimensional virtual image of the virtual object is located.
In an optional manner, in order to dynamically reflect the change of the presented stereoscopic virtual image while the virtual object is being adjusted, the spatial position of the virtual camera in the virtual scene may also change continuously during the adjustment. For example, the adjusted spatial position of the virtual camera may be determined continuously along with the adjustment operation, so as to control the display of the virtual object on the display screen according to the adjusted spatial position in real time, and to dynamically determine and display the marker points within the visible range.
For example, an adjustment operation for rotating the virtual object actually changes the angle of view of the virtual scene, and therefore, the position of the virtual camera is also rotated correspondingly with the adjustment operation. For example, during the rotation operation, the spatial displacement of the virtual camera in the virtual scene is determined according to the movement coordinates of the rotation operation moving on the display area, so as to obtain the spatial position of the virtual camera in the virtual space, i.e., the adjusted spatial position.
For the zooming operation of zooming the virtual object, the spatial distance of the virtual camera relative to the virtual object can be changed according to the zooming scale of the zooming operation, and the adjusted spatial position of the virtual camera can be determined.
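A hedged sketch of step S606 follows: the rotation operation orbits the virtual camera around the object's center, and the zoom operation changes the camera's distance to the object according to the zoom scale. The function name, the choice of the vertical axis for rotation, and the 1/scale distance rule are illustrative assumptions, not details given by the patent.

```python
import math

def adjust_camera(camera_pos, target, yaw_delta=0.0, zoom_scale=1.0):
    """Return the adjusted spatial position of the virtual camera: orbit it
    around the object's center (target) by yaw_delta radians, and scale its
    distance to the object by 1/zoom_scale (zoom_scale > 1 zooms in).
    """
    # Offset from the object's center to the camera.
    ox, oy, oz = (camera_pos[i] - target[i] for i in range(3))
    # Rotate the offset around the vertical (y) axis.
    c, s = math.cos(yaw_delta), math.sin(yaw_delta)
    rx, rz = c * ox + s * oz, -s * ox + c * oz
    # Zooming in (scale > 1) moves the camera closer to the object.
    k = 1.0 / zoom_scale
    return (target[0] + rx * k, target[1] + oy * k, target[2] + rz * k)
```

For instance, doubling the zoom scale halves the camera's distance to the object, after which the second camera coordinate system of step S607 would be rebuilt from the returned position.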
S607, mapping the three-dimensional virtual image of the virtual object to the display screen of the electronic device according to the second camera coordinate system constructed by the adjusted spatial position of the virtual camera, so as to adjust the displayed three-dimensional virtual image of the virtual object.
It can be understood that, once the second camera coordinate system is determined, the present application can map the coordinates of each pixel point of the stereoscopic virtual image of the virtual object from the world coordinate system into the second camera coordinate system, then map them to screen coordinates in the screen coordinate system of the display screen, and finally map the stereoscopic virtual image to the display screen by combining the screen coordinates of each pixel point.
The specific implementation process of displaying the three-dimensional virtual image of the virtual object on the display screen in combination with the second camera coordinate system is similar to the implementation process of displaying the three-dimensional virtual image of the virtual object on the display screen in combination with the first camera coordinate system, but the camera coordinate system is changed, and is not described herein again.
And S608, determining at least one second target mark point in a visual range on the stereoscopic virtual image by taking the adjusted space position of the virtual camera as a viewpoint position.
The process of determining the second target mark point according to the viewpoint position is the same as the implementation process of determining the first target mark point based on the viewpoint position, which may specifically refer to the related description of the foregoing embodiment, and is not described herein again.
And S609, displaying the labeling information related to the at least one second target mark point on a display screen.
The specific way of displaying the labeling information associated with the second target mark point is similar to the specific way of displaying the labeling information associated with the first target mark point, and is not repeated here.
In an optional manner, the marking information associated with the second target marking point may be displayed in a first display state, and other marking points other than the second target marking point and their associated marking information may be displayed in a second display state. The first display state and the second display state may refer to the related description above, and are not described herein again.
It should be noted that, in the present embodiment, the virtual scene is taken as a three-dimensional virtual scene for ease of understanding, so the stereoscopic virtual images mentioned above are all three-dimensional virtual images of virtual objects. It is understood, however, that the present embodiment also applies when the three-dimensional virtual image of the virtual object is extended to other stereoscopic virtual images.
For ease of understanding of the present embodiment, an application scenario is described as an example: a scene in which the three-dimensional virtual image of a notebook computer sold on an e-commerce platform is displayed through a terminal.
In order to enable a user to remotely and intuitively learn the specific situation of a notebook computer sold on the e-commerce platform, the e-commerce platform constructs a virtual scene, which may include a three-dimensional virtual image of the virtual notebook computer, for example one constructed using VR technology.
On the basis, after the terminal obtains the three-dimensional virtual image of the virtual notebook computer, the three-dimensional virtual image of the notebook computer can be output in the display screen of the terminal. From the perspective of a user, the user can see an image at a certain angle in the three-dimensional virtual image of the notebook computer through the display screen of the terminal.
As shown in fig. 2, in the three-dimensional virtual image of the notebook computer displayed on the display screen of fig. 2, the user can see the plane of the body where the screen and the keyboard of the notebook computer are located, and simultaneously, the mark points on the side of the body where the screen and the keyboard are located are also in a visible state. On the basis, the user can intuitively know the shape of the body where the screen and the keyboard of the notebook computer are located, and can directly know some related parameters or introduction information of the screen and the body of the notebook computer through the marking information related to the marking points on the screen and the body.
On the basis of fig. 2, if the user wants to see the notebook computer from other angles, the user can control the virtual notebook computer displayed in a rotating manner by means of mouse or finger touch. On the basis of the angle, the angle of the virtual notebook computer presented in the display screen can be changed. For example, after rotating the virtual notebook computer, a three-dimensional virtual image of the notebook computer displayed on the display screen can be as shown in fig. 3.
Comparing fig. 2 and fig. 3, it can be seen that the notebook computer has been rotated from an angle in which the screen faces the user to an angle in which the back panel of the screen faces the user. As can be seen from the mapped image of the rotated three-dimensional virtual image of the notebook computer in fig. 3, the user can visually see the color of the screen back panel of the notebook computer and similar effects. However, in the state of fig. 3, since the screen of the notebook computer is invisible, the marker point marked on the screen is outside the visible range; in this case, the annotation information associated with that marker point may be processed into a blurred display state.
In this way, the user can visually inspect the notebook computer from different angles through the displayed virtual three-dimensional image, and can also directly see the introduction information related to different components of the notebook computer, thereby intuitively learning real information such as the appearance and performance of the notebook computer from its virtual three-dimensional image.
It can be understood that, in the embodiment of the present application, it is necessary to mark each mark point on the stereoscopic virtual image of the virtual object in advance, and create marking information for each mark point.
The specific implementation manner of labeling the mark points and creating the corresponding labeling information for the stereoscopic virtual image of the virtual object can be realized in various ways, which is not limited in the present application.
For ease of understanding, in one possible implementation, the processes of labeling the mark points and creating the labeling information in the stereoscopic virtual image of the virtual object are briefly described. Fig. 7 is a schematic flow chart illustrating a process of creating a mark point and annotation information for a stereoscopic virtual image of a virtual object.
The flow shown in fig. 7 may include:
s701, displaying a stereoscopic virtual image of the virtual object to be marked.
S702, obtaining the screen coordinate corresponding to the operation point of the labeling operation of the virtual object.
And S703, converting the screen coordinate into a space coordinate in a virtual scene where the virtual object is located.
The process of converting the screen coordinates to spatial coordinates in the world coordinate system in the virtual scene is the inverse process of converting the coordinates in the world coordinate system to screen coordinates.
Specifically, a camera coordinate system is determined in combination with the current spatial position, in the virtual scene, of the virtual camera associated with the virtual object; the screen coordinates are then converted in sequence into standardized device coordinates, the standardized device coordinates are converted into coordinates in the camera coordinate system, and finally the coordinates in the camera coordinate system are converted into spatial coordinates in the world coordinate system of the virtual scene. The specific conversion process is not repeated here.
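The first of these inverse conversions, from screen coordinates back to standardized device coordinates, is simply the inverse of the screen-mapping formulas given earlier; a sketch with an illustrative function name:

```python
def screen_to_ndc(x2, y2, w, h):
    """Invert x2 = (0.5 + x1/2) * w and y2 = (0.5 - y1/2) * h to recover
    normalized device coordinates (x1, y1) from screen coordinates."""
    x1 = (x2 / w - 0.5) * 2
    y1 = (0.5 - y2 / h) * 2
    return x1, y1
```

The remaining steps (standardized device coordinates to camera coordinates, then to world coordinates) invert the corresponding transformation matrices and are omitted here.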
S704, determining a target point to be marked on the stereoscopic virtual image of the virtual object in combination with the spatial coordinates and the current spatial position of the virtual camera in the virtual scene, and generating a marker point at the target point in the stereoscopic virtual image.
For example, a ray passing through the spatial coordinates is created starting from the current camera coordinate position of the virtual camera, and the first intersection point of the ray and the virtual object is the target point to be marked.
The generation of a marker point at the target point may be marking the target point as a marker point and storing the marker point, and the specific process is not limited.
S705, creating and storing the marking information associated with the marking point.
For example, after the mark point is generated, a mark information input box can pop up at the mark point, and the user can input the mark information associated with the mark point in the input box.
The application also provides a display device of the virtual object, which corresponds to the display method of the virtual object in the application.
As shown in fig. 8, which shows a schematic structural diagram of a display apparatus of a virtual object according to the present application, the apparatus of the present embodiment may include:
a data obtaining unit 801, configured to obtain image data of a virtual object to be displayed, where the image data includes a stereoscopic virtual image of the virtual object and labeling information associated with each marker point marked on the stereoscopic virtual image, and at least one marker point is marked on the stereoscopic virtual image;
an image display unit 802 for displaying a stereoscopic virtual image of the virtual object;
a target determining unit 803, configured to determine at least one first target marker point that can be presented within a visible range in the stereoscopic virtual image, where the first target marker point belongs to the at least one marker point;
and an annotation display unit 804, configured to display the annotation information associated with the first target marker.
In one possible implementation, the annotation display unit includes:
the first effect display unit is used for displaying the marking information associated with the first target marking point in a first display state;
the device also includes:
and the second effect display unit is used for displaying other mark points except the first target mark point and the associated annotation information thereof in the stereoscopic virtual image in a second display state while the first effect display unit displays the associated annotation information of the first target mark point, wherein the display effect of the second display state is different from the display effect of the first display state.
In yet another possible implementation manner, the image display unit includes:
and the image display subunit is used for displaying the stereoscopic virtual image of the virtual object based on the first camera coordinate system corresponding to the set initial spatial position of the virtual camera.
In an alternative, the goal determination unit comprises:
and the target determining subunit is used for determining at least one first target mark point which can be presented in a visual range on the stereoscopic virtual image by taking the initial spatial position of the virtual camera as a visual point position.
In yet another possible implementation manner, the image display subunit includes:
the coordinate mapping subunit is used for determining screen coordinates in a screen coordinate system of the display area mapped by each pixel point in the three-dimensional virtual image of the virtual object according to a first camera coordinate system established by the initial space position of the virtual camera;
and the image mapping subunit is used for mapping the stereoscopic virtual image of the virtual object to the display area by combining the screen coordinates of each pixel point in the stereoscopic virtual image of the virtual object.
In another possible implementation manner, the method further includes:
an operation obtaining unit configured to obtain an adjustment operation of adjusting a display effect of the virtual object after the stereoscopic virtual image of the virtual object is displayed by the image display unit;
an image adjusting unit configured to adjust a stereoscopic virtual image of the virtual object displayed in response to the adjustment operation;
an adjusting target unit, configured to determine at least one second target marker point that can be presented within the visible range in the adjusted stereoscopic virtual image of the virtual object, where the second target marker point belongs to the at least one marker point;
and the annotation display adjusting unit is used for displaying the annotation information associated with the second target mark point.
In a possible implementation, the image adjusting unit includes:
a position determining subunit, configured to adjust, in response to the adjustment operation, the position of the set virtual camera in a virtual scene, and to determine the adjusted spatial position of the virtual camera in the virtual scene, where the virtual scene is the virtual space to which the stereoscopic virtual image belongs;
and an image adjusting subunit, configured to map the stereoscopic virtual image of the virtual object to the display area according to a second camera coordinate system constructed from the adjusted spatial position of the virtual camera, so as to adjust the displayed stereoscopic virtual image of the virtual object.
In another possible implementation, the adjusting target unit is specifically configured to determine, using the adjusted spatial position of the virtual camera as the viewpoint position, at least one second target mark point that can be presented within the visible range on the stereoscopic virtual image.
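As a concrete illustration of the viewpoint test, the sketch below treats each mark point as a position paired with an outward surface normal and keeps only the points that face the camera position used as the viewpoint. This facing test for a convex surface is an assumed stand-in for the patent's unspecified visibility criterion, and the marker representation is hypothetical.

```python
import numpy as np

def visible_markers(markers, viewpoint):
    # markers: iterable of (position, outward_normal) pairs on the surface
    # of the stereoscopic virtual image.
    # viewpoint: the virtual camera's (initial or adjusted) spatial
    # position, used as the viewpoint position.
    visible = []
    for pos, normal in markers:
        to_eye = np.asarray(viewpoint, dtype=float) - np.asarray(pos, dtype=float)
        # A mark point on a convex surface can be presented within the
        # visible range only if its surface faces the camera.
        if np.dot(normal, to_eye) > 0.0:
            visible.append((pos, normal))
    return visible
```

For example, with the camera in front of a unit sphere, a mark point on the front of the sphere is kept while one on the back is excluded; after the camera position is adjusted, rerunning the same test with the adjusted position yields the second target mark points.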
In yet another aspect, the present application further provides an electronic device. Fig. 9 shows a schematic structural diagram of the electronic device, which may be any type of electronic device and includes at least a memory 901 and a processor 902;
wherein the processor 902 is configured to execute the display method of the virtual object according to any one of the above embodiments.
The memory 901 is used to store the program needed by the processor 902 to perform its operations.
It is to be understood that the electronic device may further include a display unit 903 and an input unit 904.
Of course, the electronic device may have more or fewer components than those shown in fig. 9; this is not limited herein.
In another aspect, the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the display method of a virtual object according to any one of the above embodiments.
The present application further provides a computer program comprising computer instructions stored in a computer-readable storage medium. When the computer program runs on an electronic device, it performs the display method of a virtual object according to any one of the above embodiments.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may refer to one another. Meanwhile, the features described in the embodiments of this specification may be replaced or combined with one another, so that those skilled in the art can implement or use the present application. Since the device embodiments are substantially similar to the method embodiments, their description is brief; for relevant details, reference may be made to the description of the method embodiments.
Finally, it should also be noted that, herein, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of displaying a virtual object, comprising:
acquiring image data of a virtual object to be displayed, wherein the image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each mark point marked on the stereoscopic virtual image, and at least one mark point is marked on the stereoscopic virtual image;
displaying the stereoscopic virtual image of the virtual object;
determining at least one first target mark point that can be presented within a visible range in the stereoscopic virtual image, wherein the first target mark point belongs to the at least one mark point;
and displaying the annotation information associated with the first target mark point.
2. The method of claim 1, wherein the displaying the annotation information associated with the first target mark point comprises:
displaying the annotation information associated with the first target mark point in a first display state;
and when the annotation information associated with the first target mark point is displayed, the method further comprises:
displaying the mark points in the stereoscopic virtual image other than the first target mark point, and the annotation information associated with them, in a second display state, wherein the display effect of the second display state is different from that of the first display state.
3. The method of claim 1, wherein the displaying the stereoscopic virtual image of the virtual object comprises:
displaying the stereoscopic virtual image of the virtual object based on a first camera coordinate system corresponding to a set initial spatial position of a virtual camera.
4. The method of claim 3, wherein the determining at least one first target mark point that can be presented within a visible range in the stereoscopic virtual image comprises:
determining, using the initial spatial position of the virtual camera as the viewpoint position, at least one first target mark point that can be presented within the visible range on the stereoscopic virtual image.
5. The method of any one of claims 1 to 4, further comprising, after the displaying the stereoscopic virtual image of the virtual object:
obtaining an adjustment operation for adjusting the display effect of the virtual object;
adjusting the displayed stereoscopic virtual image of the virtual object in response to the adjustment operation;
determining at least one second target mark point that can be presented within a visible range in the adjusted stereoscopic virtual image of the virtual object, wherein the second target mark point belongs to the at least one mark point;
and displaying the annotation information associated with the second target mark point.
6. The method of claim 5, wherein the adjusting the displayed stereoscopic virtual image of the virtual object in response to the adjustment operation comprises:
adjusting, in response to the adjustment operation, the position of a set virtual camera in a virtual scene, and determining the adjusted spatial position of the virtual camera in the virtual scene, wherein the virtual scene is the virtual space to which the stereoscopic virtual image belongs;
and mapping the stereoscopic virtual image of the virtual object to a display area according to a second camera coordinate system constructed from the adjusted spatial position of the virtual camera, so as to adjust the displayed stereoscopic virtual image of the virtual object.
7. The method of claim 6, wherein the determining at least one second target mark point that can be presented within a visible range in the adjusted stereoscopic virtual image of the virtual object comprises:
determining, using the adjusted spatial position of the virtual camera as the viewpoint position, at least one second target mark point that can be presented within the visible range on the stereoscopic virtual image.
8. The method of claim 3, wherein the displaying the stereoscopic virtual image of the virtual object based on the first camera coordinate system corresponding to the set initial spatial position of the virtual camera comprises:
determining, according to a first camera coordinate system constructed from the initial spatial position of the virtual camera, the screen coordinates to which each pixel point in the stereoscopic virtual image of the virtual object is mapped in the screen coordinate system of a display area;
and mapping the stereoscopic virtual image of the virtual object to the display area in combination with the screen coordinates of each pixel point in the stereoscopic virtual image of the virtual object.
9. A display device of a virtual object, comprising:
a data acquisition unit, configured to acquire image data of a virtual object to be displayed, wherein the image data comprises a stereoscopic virtual image of the virtual object and annotation information associated with each mark point marked on the stereoscopic virtual image, and at least one mark point is marked on the stereoscopic virtual image;
an image display unit, configured to display the stereoscopic virtual image of the virtual object;
a target determining unit, configured to determine at least one first target mark point that can be presented within a visible range in the stereoscopic virtual image, wherein the first target mark point belongs to the at least one mark point;
and an annotation display unit, configured to display the annotation information associated with the first target mark point.
10. An electronic device, comprising at least a memory and a processor;
wherein the processor is configured to execute the display method of a virtual object according to any one of claims 1 to 8;
and the memory is configured to store the program needed by the processor to perform its operations.
CN202210118043.4A 2022-02-08 2022-02-08 Virtual object display method and device and electronic equipment Pending CN114445525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210118043.4A CN114445525A (en) 2022-02-08 2022-02-08 Virtual object display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210118043.4A CN114445525A (en) 2022-02-08 2022-02-08 Virtual object display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114445525A true CN114445525A (en) 2022-05-06

Family

ID=81372049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210118043.4A Pending CN114445525A (en) 2022-02-08 2022-02-08 Virtual object display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114445525A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016721A (en) * 2022-05-09 2022-09-06 北京城市网邻信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN115033133A (en) * 2022-05-13 2022-09-09 北京五八信息技术有限公司 Progressive information display method and device, electronic equipment and storage medium
CN115033133B (en) * 2022-05-13 2023-03-17 北京五八信息技术有限公司 Progressive information display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Fuhrmann et al. Occlusion in collaborative augmented environments
KR102495447B1 (en) Providing a tele-immersive experience using a mirror metaphor
CN108762482B (en) Data interaction method and system between large screen and augmented reality glasses
CN110163942B (en) Image data processing method and device
KR100953931B1 (en) System for constructing mixed reality and Method thereof
CN111164971B (en) Parallax viewer system for 3D content
US20070291035A1 (en) Horizontal Perspective Representation
CN114445525A (en) Virtual object display method and device and electronic equipment
JP2008521110A (en) Personal device with image capture function for augmented reality resources application and method thereof
Tatzgern et al. Exploring real world points of interest: Design and evaluation of object-centric exploration techniques for augmented reality
CN111950521A (en) Augmented reality interaction method and device, electronic equipment and storage medium
JP2005135355A (en) Data authoring processing apparatus
CN108133454B (en) Space geometric model image switching method, device and system and interaction equipment
KR100971667B1 (en) Apparatus and method for providing realistic contents through augmented book
Nishino et al. 3d object modeling using spatial and pictographic gestures
Park et al. DesignAR: Portable projection-based AR system specialized in interior design
EP4325344A1 (en) Multi-terminal collaborative display update method and apparatus
US11341716B1 (en) Augmented-reality system and method
CN113849112A (en) Augmented reality interaction method and device suitable for power grid regulation and control and storage medium
KR101047615B1 (en) Augmented Reality Matching System and Method Using Resolution Difference
CN114442888B (en) Object determination method and device and electronic equipment
CN112667137B (en) Switching display method and device for house type graph and house three-dimensional model
US20220206669A1 (en) Information processing apparatus, information processing method, and program
JP5520772B2 (en) Stereoscopic image display system and display method
KR102419290B1 (en) Method and Apparatus for synthesizing 3-dimensional virtual object to video data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination