WO2024088144A1 - Processing method and apparatus for augmented reality picture, electronic device, and storage medium - Google Patents

Processing method and apparatus for augmented reality picture, electronic device, and storage medium

Info

Publication number
WO2024088144A1
WO2024088144A1 PCT/CN2023/125332
Authority
WO
WIPO (PCT)
Prior art keywords
augmented reality
cross
dimensional model
section
linear object
Prior art date
Application number
PCT/CN2023/125332
Other languages
English (en)
French (fr)
Inventor
周栩彬
Original Assignee
Beijing Zitiao Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co., Ltd.
Publication of WO2024088144A1 publication Critical patent/WO2024088144A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the embodiments of the present disclosure relate to computer technology, for example, to a method, device, electronic device and storage medium for processing augmented reality images.
  • Augmented Reality (AR) technology is a technology that obtains real-world images by shooting the real world in real time and overlays virtual information on the real-world images.
  • the present disclosure provides a method, device, electronic device and storage medium for processing an augmented reality image.
  • an embodiment of the present disclosure provides a method for processing an augmented reality picture, the method comprising:
  • in response to a rendering trigger request for the augmented reality picture, displaying a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and in response to a display adjustment operation on the three-dimensional model, displaying an adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes at least one of a display position adjustment operation, a display size adjustment operation, or a display angle adjustment operation.
  • the embodiments of the present disclosure further provide a device for processing an augmented reality image, the device comprising:
  • a request module configured to display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture in response to a rendering trigger request for the augmented reality picture;
  • the display module is configured to display the adjusted three-dimensional model of the linear object in the augmented reality screen in response to a display adjustment operation on the three-dimensional model, wherein the display adjustment operation includes at least one of a display position adjustment operation, a display size adjustment operation or a display angle adjustment operation.
  • an embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
  • one or more processors;
  • a storage device configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method for processing an augmented reality image as described in any of the embodiments of the present disclosure.
  • the embodiments of the present disclosure further provide a storage medium comprising computer executable instructions, which, when executed by a computer processor, are used to execute the method for processing an augmented reality image as described in any one of the embodiments of the present disclosure.
  • FIG1 is a schematic flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG2 is a schematic flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG3 is a flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG4 is a schematic flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG5 is a schematic flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG6A is an example diagram of rendering a three-dimensional model of a linear object in a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG6B is an example diagram of rendering of a three-dimensional model of a linear object in another method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG. 7 is a flowchart illustrating a method for acquiring key points of a linear object in a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG8 is an example diagram of a triangular primitive on a cross section of a three-dimensional model of a linear object in a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG9 is an example diagram of triangular primitives on a connecting surface of a three-dimensional model of a linear object in a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG10 is an example diagram of intermittent rendering of a three-dimensional model of a linear object in a method for processing an augmented reality image provided by an embodiment of the present disclosure
  • FIG11 is a schematic diagram of the structure of an augmented reality image processing device provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure.
  • Two-dimensional lines are used to enrich and assist the augmented reality screen, that is, two-dimensional lines are drawn on the augmented reality screen.
  • this processing method often makes the lines in the augmented reality screen appear stiff and lack three-dimensionality, thus affecting the display quality of the screen.
  • the two-dimensional display style of such lines is often inconsistent with the display quality of the rest of the screen.
  • the display mode is relatively fixed, which cannot meet the personalized interaction needs of users well and affects the user experience.
  • the embodiments of the present disclosure provide a method, device, electronic device and storage medium for processing an augmented reality image.
  • a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information.
  • the user can autonomously choose whether to provide personal information to software or hardware such as an electronic device, application, server, or storage medium that performs the operation of the disclosed embodiment according to the prompt message.
  • in response to receiving an active request from the user, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented in the form of text.
  • the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
  • FIG1 is a flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure can render an augmented reality image.
  • the method can be executed by a processing device for augmented reality images.
  • the device can be implemented in the form of software and/or hardware, for example, by an electronic device, which can be a mobile terminal, a personal computer (PC) or a server, etc.
  • the method of this embodiment may include:
  • the augmented reality picture is generally a picture in which the real environment and the virtual object exist in the same space at the same time, that is, a picture generated after the virtual object is applied to the real environment.
  • the virtual object can be a three-dimensional model of a linear object.
  • the linear object can be understood as an object with linear features.
  • the shape of the linear object can include at least one of a straight line, a curve and a broken line.
  • the expression style of the shape of the linear object can be a solid line or a dotted line.
  • the linear object may be a linear special effect object of an augmented reality screen.
  • it may be a marking line for two image objects of at least one preset marker object, or a trajectory line for marking the motion trajectory of a preset marker object, or a preset linear special effect object, etc.
  • the linear special effect object may have a variety of expressions, such as a whip or a stick, etc.
  • a linear object can be determined based on the running trajectory of the preset marker; when two preset markers are detected in the augmented reality screen, the line between the two preset markers can be used as a linear object; or, the rendering trigger request can be parsed to obtain object shape information (such as a straight line shape) for describing the linear object corresponding to the augmented reality screen, and then the linear object corresponding to the object shape information described in the rendering trigger request can be determined based on the correspondence between the preset linear object and the object shape information.
  • the three-dimensional model of the linear object can be understood as a rendering model with three-dimensional features used to represent the linear object, for example, a straight-line three-dimensional model, a curved three-dimensional model, or a broken-line three-dimensional model.
  • the type of the three-dimensional model of a linear object can be a mesh type or a point cloud type.
  • the three-dimensional model of a linear object can be a three-dimensional mesh model of the linear object, or it can be a three-dimensional point cloud model of the linear object.
  • the rendering trigger request can be understood as a trigger request for displaying the three-dimensional model of the linear object corresponding to the augmented reality screen in the augmented reality screen.
  • the number of three-dimensional models of the linear object corresponding to the augmented reality screen can be one, two, or more than two.
  • the three-dimensional model of the linear object corresponding to the augmented reality screen can be obtained from a database for storing three-dimensional models based on the rendering trigger request. That is, the three-dimensional model of the linear object corresponding to the augmented reality screen is read from the database for storing three-dimensional models and loaded into the memory. After the loading is completed, the three-dimensional model of the linear object can be rendered. Thereby, the three-dimensional model of the linear object is displayed in the augmented reality screen, and the picture quality of the augmented reality screen is improved.
  • the database for storing three-dimensional models can be a local database or a remote database.
  • a three-dimensional model of a linear object corresponding to the augmented reality screen can be constructed based on the rendering trigger request.
  • the three-dimensional model of the linear object can be rendered.
  • the three-dimensional model of the linear object is displayed in the augmented reality screen.
  • constructing a three-dimensional model of a linear object corresponding to the augmented reality screen based on a rendering trigger request includes: parsing the rendering trigger request to obtain model feature data of the three-dimensional model of the linear object displayed in the augmented reality screen. Then, a three-dimensional model of the linear object corresponding to the augmented reality screen can be constructed based on the model feature data.
  • the method for obtaining a rendering trigger request may be: receiving a trigger operation for triggering the rendering of an augmented reality screen, and generating a rendering trigger request for the augmented reality screen based on the trigger operation.
  • the trigger operation for triggering the rendering of an augmented reality screen may be a trigger operation generated by a trigger control acting on a trigger for triggering the rendering of an augmented reality screen; or, a trigger operation generated based on a collected voice instruction for rendering an augmented reality screen; or, a trigger operation generated based on collected image information for rendering an augmented reality screen.
  • the trigger control may include a physical trigger control and/or a virtual trigger control.
  • a physical trigger control may be a physical control, such as a push button, a slide button, etc.
  • a virtual trigger control may be a control displayed in the augmented reality screen for triggering its rendering. It should also be noted that the icon style, display effect and display position of the virtual trigger control can be set according to actual needs and are not limited here. For example, receiving a trigger operation for triggering the rendering of the augmented reality screen may be receiving a trigger operation (such as clicking or pressing a button) acting on a trigger control for triggering the rendering of the augmented reality screen.
  • rendering the three-dimensional model of the linear object may include: parsing the rendering trigger request to obtain rendering parameters for the three-dimensional model of the linear object. Then, rendering may be performed on the three-dimensional model of the linear object based on the rendering parameters.
  • the rendering parameters may include materials, lights, and textures.
  • rendering the three-dimensional model of the linear object may include: after receiving a rendering trigger request for an augmented reality screen, based on the rendering trigger request, reading rendering parameters corresponding to the three-dimensional model of the linear object from pre-configured rendering parameter information, and rendering the three-dimensional model of the linear object based on those rendering parameters.
  • S120 In response to a display adjustment operation on the three-dimensional model of the linear object, display an adjusted three-dimensional model of the linear object in the augmented reality screen.
  • the display adjustment operation can be understood as an operation for adjusting the linear object displayed in the augmented reality screen.
  • the display adjustment operation can be a touch operation acting on the linear object displayed in the augmented reality screen, such as a single-click operation, a sliding operation, or a double-click operation; or, a click operation acting on a control used to adjust the linear object displayed in the augmented reality screen; or, a pressing operation acting on a physical button used to adjust the linear object displayed in the augmented reality screen, etc.
  • the display adjustment operation may include a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.
  • through the display adjustment operation, not only can the display of the linear object be adjusted to better meet the personalized needs of the user, but the three-dimensional model of the linear object can also be displayed in an all-round, multi-angle manner, enriching the augmented reality screen.
  • the display position adjustment operation may be understood as an operation for adjusting the position of a linear object displayed in an augmented reality screen.
  • the linear object currently displayed in the augmented reality screen is adjusted from the current display position to the target display position in the augmented reality screen.
  • the target display position may be understood as the display position of the linear object displayed in the augmented reality screen after moving in a certain direction with the current position of the linear object as the reference position.
  • a certain direction may include moving to the left, moving to the right, moving up, and moving down, etc.
  • for example, the linear object displayed in the lower left corner of the augmented reality screen may be adjusted from the lower left corner of the augmented reality screen to the upper left corner of the augmented reality screen.
  • the display size adjustment operation can be understood as an operation for adjusting the size of a linear object displayed in an augmented reality screen, that is, adjusting the linear object of the current display size displayed in the augmented reality screen from the current display size to the target display size.
  • the target display size can be understood as the size obtained by enlarging or reducing the linear object displayed in the augmented reality screen.
  • the display angle adjustment operation can be understood as an operation for adjusting the angle of a linear object displayed in the augmented reality screen. In other words, the linear object displayed at the current display angle in the augmented reality screen is adjusted from the current display angle to the target display angle, so that the linear object is displayed in the augmented reality screen at the target display angle.
  • the target display angle can be understood as the angle obtained by adjusting the display angle of the linear object displayed at the current display angle in the augmented reality screen.
  • display position operation information of the display position adjustment operation can be obtained. Based on the display position operation information, a target display position for displaying the three-dimensional model of the linear object in the augmented reality screen can be determined. Based on the target display position, the three-dimensional model of the linear object can be rendered, so that the three-dimensional model of the linear object is displayed at the target display position of the augmented reality screen.
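The position adjustment described above amounts to translating every key point of the model by the offset between the current and target display positions. The sketch below is illustrative only; the function and variable names are hypothetical, not from the patent.

```python
# Sketch: apply a display-position adjustment by translating every key
# point of the linear object's 3D model by the same offset.

def move_model(key_points, current_pos, target_pos):
    """Translate key points so the model moves from current_pos to target_pos."""
    offset = [t - c for t, c in zip(target_pos, current_pos)]
    return [[p + o for p, o in zip(pt, offset)] for pt in key_points]

# Move a two-point line segment upward in the screen's world space.
line = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
moved = move_model(line, current_pos=[0.0, 0.0, 0.0], target_pos=[0.0, 2.0, 0.0])
```

After re-rendering with the translated key points, the model appears at the target display position while its shape and size are unchanged.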
  • the display angle operation information of the display angle adjustment operation can be obtained.
  • the rotation angle and the rotation axis for performing the display angle operation on the three-dimensional model of the linear object can be determined based on the display angle operation information.
  • the target display angle of the three-dimensional model of the linear object can be determined based on the rotation axis and the rotation angle.
  • the three-dimensional model of the linear object can be displayed in the augmented reality screen at the target display angle.
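The display-angle adjustment above turns a rotation axis and a rotation angle into a transform applied to the model. A minimal sketch using Rodrigues' rotation formula (an assumed implementation choice; the patent does not name a specific formula):

```python
import math

def rotation_matrix(axis, angle_rad):
    """Axis-angle rotation matrix via Rodrigues' formula; axis must be unit length."""
    x, y, z = axis
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    C = 1.0 - c
    return [
        [c + x * x * C,     x * y * C - z * s, x * z * C + y * s],
        [y * x * C + z * s, c + y * y * C,     y * z * C - x * s],
        [z * x * C - y * s, z * y * C + x * s, c + z * z * C],
    ]

def rotate_point(R, p):
    """Apply a 3x3 rotation matrix to a 3D point."""
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

# Rotate a model vertex 90 degrees about the z axis.
R = rotation_matrix([0.0, 0.0, 1.0], math.pi / 2)
p = rotate_point(R, [1.0, 0.0, 0.0])
```

Applying the same matrix to every vertex of the three-dimensional model displays it at the target display angle.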
  • display size operation information of the display size adjustment operation can be obtained.
  • the three-dimensional model of the linear object can be reconstructed based on the display size operation information.
  • a reconstructed three-dimensional model of the linear object can be obtained.
  • the reconstructed three-dimensional model of the linear object can be rendered into an augmented reality screen.
  • the reconstructed three-dimensional model of the linear object is displayed in the augmented reality screen.
  • the embodiments of the present disclosure there are multiple ways to reconstruct the three-dimensional model of the linear object.
  • it may include: determining the target display size for displaying the three-dimensional model of the linear object in the augmented reality screen based on the display size operation information; and reconstructing the three-dimensional model of the linear object based on the target display size.
  • it may include: determining the current display size of the three-dimensional model of the linear object; obtaining a size ratio relative to the current display size based on the display size operation information; and then, based on the size ratio and the current display size, determining the target display size for displaying the three-dimensional model of the linear object in the augmented reality screen, so that the three-dimensional model of the linear object at the target display size is displayed in the augmented reality screen.
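The size-ratio computation above is simple scaling. A hypothetical sketch (names are illustrative): the target display size is the current size multiplied by the ratio, and key points are scaled about the model center so only the size changes, not the position.

```python
# Sketch: derive the target display size from the current size and the
# ratio obtained from the display-size adjustment operation.

def target_display_size(current_size, size_ratio):
    """Scale the current display size by the ratio from the adjustment gesture."""
    return current_size * size_ratio

def scale_model(key_points, center, size_ratio):
    """Scale key points about the model center so the position is unchanged."""
    return [[c + (p - c) * size_ratio for p, c in zip(pt, center)]
            for pt in key_points]

# Double the size of a model whose center sits at x = 1.
pts = scale_model([[2.0, 0.0, 0.0]], center=[1.0, 0.0, 0.0], size_ratio=2.0)
```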
  • a three-dimensional model of a linear object corresponding to the augmented reality screen is displayed in the augmented reality screen, thereby improving the stereoscopic and realistic feeling of the linear object in the augmented reality screen.
  • the adjusted three-dimensional model of the linear object is displayed in the augmented reality screen, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.
  • FIG2 is a flow chart of a method for processing an augmented reality screen provided by an embodiment of the present disclosure.
  • this embodiment exemplarily describes how to display a three-dimensional model of a linear object in an augmented reality screen.
  • the three-dimensional model of the linear object corresponding to the augmented reality screen is displayed in the augmented reality screen, including: obtaining multiple key points of the linear object corresponding to the augmented reality screen; rendering the three-dimensional model of the linear object based on the multiple key points, and displaying the three-dimensional model in the augmented reality screen.
  • the technical features that are the same or similar to those in the above embodiments are not repeated here.
  • the method of this embodiment may include:
  • S210 In response to a rendering trigger request for an augmented reality picture, obtain a plurality of key points of a linear object corresponding to the augmented reality picture.
  • the key points of the linear object can be understood as the characteristic points of the linear object.
  • Multiple key points of the linear object can depict the shape of the linear object, such as a straight line, a curve or a broken line.
  • a coordinate system can be pre-constructed, so that the coordinate points under the coordinate system can be used to represent the position information of each key point of the linear object.
  • a linear object corresponding to the augmented reality picture may be determined based on the rendering trigger request, and then a plurality of key points of the linear object may be obtained.
  • obtaining multiple key points of the linear object corresponding to the augmented reality screen may include: generating multiple key points of the linear object corresponding to the augmented reality screen based on a preset algorithm.
  • the preset algorithm may be an algorithm preset for generating key points.
  • generating multiple key points of the linear object corresponding to the augmented reality screen based on a preset algorithm may include: randomly generating multiple key points of the linear object corresponding to the augmented reality screen based on a preset algorithm; or, after receiving a rendering trigger request for the augmented reality screen, parsing the rendering trigger request.
  • a preset drawing frame rate of the key points of the linear object corresponding to the augmented reality screen may be obtained. Then, based on the preset drawing frame rate and the preset algorithm, multiple key points of the linear object corresponding to the augmented reality screen are generated.
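Generating key points at a preset drawing frame rate can be sketched as sampling a parametric trajectory at evenly spaced times. The helical flight path below is an invented example for illustration, not from the patent:

```python
import math

def sample_key_points(trajectory, frame_rate, duration_s):
    """Sample key points along a parametric trajectory at the preset
    drawing frame rate over the given duration."""
    n = int(frame_rate * duration_s)
    return [trajectory(i / max(n - 1, 1) * duration_s) for i in range(n)]

# A hypothetical helical flight path (x, y, z) as a function of time t.
helix = lambda t: (math.cos(t), math.sin(t), 0.5 * t)
points = sample_key_points(helix, frame_rate=30, duration_s=2.0)
```

At 30 frames per second over 2 seconds this yields 60 key points tracing the trajectory of the associated object.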
  • obtaining multiple key points of the linear object corresponding to the augmented reality screen may include: determining an associated object of the linear object to be rendered in the augmented reality screen, and determining multiple key points of the linear object corresponding to the augmented reality screen based on the motion trajectory of the associated object.
  • the linear object to be rendered can be understood as a linear object that needs to be rendered in the augmented reality screen.
  • the associated object can be understood as an object in the augmented reality screen that has an associated relationship with the linear object to be rendered.
  • the associated object that has an associated relationship with the linear object to be rendered may be a paper airplane displayed in the augmented reality screen.
  • the linear object to be rendered may be the flight trajectory of the paper airplane displayed in the augmented reality screen.
  • an associated object in the augmented reality screen that has an associated relationship with the linear object to be rendered is determined.
  • a motion trajectory of the associated object can be obtained.
  • a feature extraction process can be performed on the motion trajectory.
  • multiple feature points of the motion trajectory can be obtained, and the extracted multiple feature points are used as multiple key points of the linear object corresponding to the augmented reality screen.
  • the associated object of the linear object to be rendered may be a flying aircraft displayed in the augmented reality screen.
  • the linear object to be rendered may be a motion track of one or more positions on the flying aircraft in the augmented reality screen.
  • the multiple key points of the linear object to be rendered may be understood as feature points of the motion track of a certain position on the flying aircraft in the augmented reality screen.
  • obtaining multiple key points of the linear object corresponding to the augmented reality screen includes: determining the shape of the linear object corresponding to the augmented reality screen. Then, based on the shape of the linear object, multiple key points of the linear object can be obtained. For example, obtaining multiple key points of the linear object based on the shape of the linear object can include: based on the shape of the linear object, obtaining key points corresponding to the shape in a database for storing key points of linear objects; or generating key points of the linear object based on the shape of the linear object.
  • S220 Render a three-dimensional model of the linear object based on the multiple key points, and display the three-dimensional model in the augmented reality screen.
  • a three-dimensional model of the linear object corresponding to the plurality of key points may be obtained, and the three-dimensional model may be rendered into an augmented reality screen, so that the three-dimensional model is displayed in the augmented reality screen.
  • a fitting process can be performed on multiple key points to obtain a fitting result. Then, a model can be reconstructed based on the fitting result. Thus, a three-dimensional model of a linear object corresponding to the multiple key points can be obtained.
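One plausible fitting process for the sparse key points is spline interpolation. The sketch below uses a Catmull-Rom spline (an assumed choice; the patent does not name a fitting method) to densify the key points into a smooth centerline for the model:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """One Catmull-Rom segment; passes through p1 at t=0 and p2 at t=1."""
    return [0.5 * (2 * b + (-a + c) * t + (2 * a - 5 * b + 4 * c - d) * t * t
                   + (-a + 3 * b - 3 * c + d) * t * t * t)
            for a, b, c, d in zip(p0, p1, p2, p3)]

def fit_key_points(points, samples_per_seg=8):
    """Densify sparse key points into a smooth curve through all of them."""
    padded = [points[0]] + points + [points[-1]]  # duplicate endpoints
    curve = []
    for i in range(len(points) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        for k in range(samples_per_seg):
            curve.append(catmull_rom(p0, p1, p2, p3, k / samples_per_seg))
    curve.append(points[-1])
    return curve

curve = fit_key_points([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])
```

The fitted curve interpolates every original key point, so the reconstructed model follows the trajectory that the key points depict.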
  • the advantage of this is that the three-dimensional model of the linear object can be drawn according to personalized needs.
  • a three-dimensional model matching multiple key points can be matched from a database for storing three-dimensional models, and the matched three-dimensional model can be used as a three-dimensional model of a linear object corresponding to the multiple key points.
  • the advantage of doing so is that the three-dimensional model of the linear object can be obtained more quickly and effectively, thereby improving the response speed of the rendering trigger request of the augmented reality screen.
  • the disclosed embodiment acquires a plurality of key points of a linear object corresponding to the augmented reality screen; renders a three-dimensional model of the linear object based on the plurality of key points, and displays the three-dimensional model in the augmented reality screen, thereby enabling targeted acquisition of the three-dimensional model of the linear object, thereby enriching the augmented reality screen.
  • FIG3 is a flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure.
  • this embodiment exemplarily describes how to render a three-dimensional model of a linear object based on multiple key points.
  • rendering the three-dimensional model of the linear object based on multiple key points includes: for each key point, drawing a circle with the key point as the center; determining multiple vertices of the three-dimensional model of the linear object based on the points on the circle; and rendering the three-dimensional model of the linear object based on the multiple vertices.
  • the technical features that are the same or similar to those in the above embodiments are not repeated here.
  • the method of this embodiment may include:
  • S310 In response to a rendering trigger request for an augmented reality picture, obtain a plurality of key points of a linear object corresponding to the augmented reality picture.
  • drawing a circle with the key point as the center may include: obtaining a preset circle drawing radius corresponding to the key point, taking the key point as the center of the circle, and drawing the circle based on the preset circle drawing radius and the center of the circle.
  • the preset circle drawing radius may be understood as a circle drawing radius preset for each key point. It should be noted that the preset circle drawing radius corresponding to each key point may be the same or different.
  • obtaining the preset circle drawing radius corresponding to the key point may include: parsing the rendering trigger request, so as to obtain the preset circle drawing radius corresponding to each key point contained in the rendering trigger request; or obtaining circle drawing radius configuration information configured for the multiple key points, wherein the preset circle drawing radius for each key point is configured in the circle drawing radius configuration information; matching the preset circle drawing radius corresponding to the key point with the circle drawing radius configuration information.
  • any diameter of the circle passing through its center is taken, and the diameter is rotated by a preset rotation angle (such as 5 degrees, 10 degrees or 15 degrees).
  • the intersection points of the diameter and the circle are determined, and the intersection points are used as multiple vertices of the three-dimensional model of the linear object.
  • any point is selected on the circle, the selected point is used as a fixed point, and multiple straight lines are drawn from the fixed point, and the intersections of each straight line and the circle are used as multiple vertices of the three-dimensional model of the linear object.
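Either construction yields a ring of evenly spaced vertices on the circle. A sketch of the rotated-diameter variant, sampling the circle at a preset angle step (the names and the fixed z-plane are illustrative assumptions; the cross-section's orientation in space is handled later):

```python
import math

def circle_vertices(center, radius, step_deg):
    """Vertices of a cross-section circle: rotating a diameter in step_deg
    increments and keeping its intersections with the circle is equivalent
    to sampling the circle every step_deg degrees."""
    verts = []
    for k in range(int(360 / step_deg)):
        a = math.radians(k * step_deg)
        verts.append((center[0] + radius * math.cos(a),
                      center[1] + radius * math.sin(a),
                      center[2]))
    return verts

# One key point's cross-section ring, with a 10-degree rotation step.
ring = circle_vertices(center=(0.0, 0.0, 0.0), radius=1.0, step_deg=10)
```

A 10-degree step produces 36 vertices per cross-section; adjacent cross-section rings can then be stitched into the triangular primitives of the model's surface.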
  • rendering the three-dimensional model of the linear object based on the plurality of vertices and displaying the three-dimensional model in the augmented reality screen may include: performing curve fitting processing on the plurality of vertices, and then fitting each of the vertices into a curve.
  • a plurality of curves may be obtained.
  • each curve may be constructed into a surface based on a preset surface creation method.
  • a plurality of surfaces may be obtained.
  • a three-dimensional model of the linear object may be constructed based on each surface.
  • the three-dimensional model may be rendered based on the rendering information of the three-dimensional model (such as texture, material, mapping, lighting, and model skeleton movement, etc.). After the rendering is completed, the rendered three-dimensional model is displayed in the augmented reality screen.
  • a circle is drawn with the key point as the center, multiple vertices of the three-dimensional model of the linear object are determined based on the points located on the circle, and the three-dimensional model of the linear object is rendered based on the multiple vertices, thereby realizing dynamic construction of the three-dimensional model of the linear object.
  • FIG4 is a schematic flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure.
  • this embodiment exemplarily describes how to render a three-dimensional model of a linear object based on multiple vertices.
  • the rendering of the three-dimensional model of the linear object based on multiple vertices includes: taking the circle as a cross-section of the three-dimensional model, and for each cross-section, determining a rotation matrix corresponding to the cross-section based on cross-sections adjacent to the cross-section; determining the spatial coordinates of the vertices corresponding to the cross-section based on the rotation matrix, and rendering the three-dimensional model of the linear object based on the spatial coordinates of the multiple vertices.
  • the method of this embodiment may include:
  • S410 In response to a rendering trigger request for an augmented reality picture, obtain a plurality of key points of a linear object corresponding to the augmented reality picture.
  • determining the rotation matrix corresponding to the cross-section based on the cross-section adjacent to the cross-section includes: taking the vector between the center of the cross-section and the center of the cross-section adjacent to the cross-section as a reference vector, and calculating the rotation matrix corresponding to the cross-section based on the horizontal direction vector and the reference vector.
  • the reference vector can be understood as a directed line segment between the center of the cross section and the center of the cross section adjacent to the cross section.
  • the directed line segment between the center of the cross section and the center of the cross section adjacent to the cross section can be a directed line segment from the center of the cross section to the center of the cross section adjacent to the cross section, or can be a directed line segment from the center of the cross section adjacent to the cross section to the center of the cross section.
  • the horizontal direction vector can be understood as any vector parallel to the X-axis in a three-dimensional coordinate system.
  • the horizontal direction vector can be understood as any vector perpendicular to the YZ plane in a three-dimensional coordinate system.
  • the YZ plane is a plane formed by the Y axis and the Z axis.
• calculating the rotation matrix corresponding to the cross section according to the horizontal direction vector and the reference vector may include: passing the horizontal direction vector and the reference vector as actual parameters to a rotation matrix method predefined for calculating the rotation matrix corresponding to the cross section; after the parameters are passed, the rotation matrix method is executed, and the rotation matrix corresponding to the cross section is calculated.
  • the horizontal direction vector and the reference vector are substituted into the dot product formula to calculate and obtain the rotation angle between the horizontal direction vector and the reference vector.
• the horizontal direction vector and the reference vector are also substituted into the cross product formula to calculate and obtain the rotation axis between the cross section and the cross section adjacent to it, and the rotation matrix corresponding to the cross section is then calculated based on the rotation angle and the rotation axis.
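The angle-and-axis computation described above can be sketched as follows. This is an illustrative reconstruction, assuming the standard approach of taking the angle from the dot product, the axis from the cross product, and assembling the matrix with Rodrigues' rotation formula; the function name `rotation_between` is hypothetical.

```python
import numpy as np

def rotation_between(h, r):
    """Rotation matrix aligning the horizontal direction vector `h`
    with the reference vector `r` between adjacent cross-section centers."""
    h = np.asarray(h, dtype=float); h /= np.linalg.norm(h)
    r = np.asarray(r, dtype=float); r /= np.linalg.norm(r)
    cos_theta = float(np.clip(np.dot(h, r), -1.0, 1.0))  # dot product -> angle
    axis = np.cross(h, r)                                # cross product -> axis
    norm = np.linalg.norm(axis)
    if norm < 1e-9:
        if cos_theta > 0.0:
            return np.eye(3)  # vectors already aligned
        # Antiparallel: rotate 180 degrees about any axis perpendicular to h.
        axis = np.cross(h, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(h, np.array([0.0, 1.0, 0.0]))
        axis /= np.linalg.norm(axis)
        theta = np.pi
    else:
        axis /= norm
        theta = np.arccos(cos_theta)
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

M = rotation_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Here `M` rotates the x-axis onto the y-axis, i.e. a 90-degree rotation about z.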
  • determining the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix may include: determining the spatial coordinates of the vertices corresponding to the cross section adjacent to the cross section; and then determining the spatial coordinates of the vertices corresponding to the cross section based on the spatial coordinates of the vertices corresponding to the cross section adjacent to the cross section and the rotation matrix.
• the spatial coordinates of the vertices corresponding to the cross section may be determined as follows: a coordinate matrix of the vertices corresponding to the adjacent cross section is constructed from their spatial coordinates; the coordinate matrix is then left-multiplied by the rotation matrix to perform the matrix calculation; and the spatial coordinates of the vertices corresponding to the cross section are determined from the calculation result.
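The left-multiplication step can be illustrated with a small sketch. The concrete matrices below (a 90-degree rotation about z and a center at height 2) are assumptions chosen only to make the example concrete.

```python
import numpy as np

# Vertices of the adjacent cross-section, expressed relative to its
# center and stacked as columns of a 3 x m coordinate matrix.
prev_ring = np.array([[1.0, 0.0, -1.0, 0.0],
                      [0.0, 1.0, 0.0, -1.0],
                      [0.0, 0.0, 0.0, 0.0]])

# Hypothetical rotation matrix for the current cross-section
# (90 degrees about the z-axis) and the current center point.
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
center = np.array([[0.0], [0.0], [2.0]])

# Left-multiplying the coordinate matrix by the rotation matrix re-orients
# the ring; adding the center gives the cross-section's vertex coordinates.
curr_ring = R @ prev_ring + center
```

The first column of `curr_ring` is the first vertex of the new cross-section.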
  • S450 In response to the display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.
  • the circle is used as the cross-section of the three-dimensional model, and for each cross-section, the rotation matrix corresponding to the cross-section is determined based on the cross-sections adjacent to the cross-section; the spatial coordinates of the vertices corresponding to the cross-section are determined based on the rotation matrix, and the three-dimensional model of the linear object is rendered based on the spatial coordinates of the multiple vertices, thereby achieving more efficient and accurate construction of the three-dimensional model of the linear object.
  • FIG5 is a flow chart of a method for processing an augmented reality image provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment exemplarily describes how to render a three-dimensional model of a linear object based on the spatial coordinates of multiple vertices.
  • the rendering of the three-dimensional model of the linear object based on the spatial coordinates of multiple vertices includes: determining the to-be-rendered surface of the three-dimensional model based on a preset rendering method of the three-dimensional model and multiple vertices of the three-dimensional model, wherein the to-be-rendered surface includes a cross-section to be rendered and a connecting surface between two adjacent cross-sections; for each cross-section to be rendered, constructing a triangular primitive based on every three vertices located on the cross-section; for each connecting surface to be rendered, constructing a triangular primitive based on every three vertices located on different cross-sections; and rendering the three-dimensional model of the linear object based on the spatial coordinates of multiple vertices and the triangular primitives of the surface to be rendered.
  • the method of this embodiment may include:
  • S510 In response to a rendering trigger request for an augmented reality picture, obtain a plurality of key points of a linear object corresponding to the augmented reality picture.
  • S520 For each of the key points, draw a circle with the key point as the center, and determine multiple vertices of the three-dimensional model of the linear object based on the points located on the circle.
  • S550 determining a to-be-rendered surface of the three-dimensional model based on a preset rendering mode of the three-dimensional model and a plurality of the vertices of the three-dimensional model, wherein the to-be-rendered surface includes a cross section to be rendered and a connecting surface between two adjacent cross sections.
  • the preset rendering mode can be understood as a rendering mode pre-set for multiple vertices of the three-dimensional model, which can include continuous rendering and/or discontinuous rendering.
  • Continuous rendering can be used to render linear objects whose shape expression style is a solid line.
  • Discontinuous rendering can be used to render linear objects whose shape expression style is a dotted line.
  • the surface to be rendered can be understood as the surface that needs to be rendered in the three-dimensional model.
  • the surface to be rendered can include at least two cross-sections to be rendered and a connecting surface between two adjacent cross-sections.
  • the three-dimensional model of the linear object can be a cylindrical model
  • the cross-section to be rendered can be the two bottom surfaces of the cylindrical model
  • the connecting surface between two adjacent cross-sections can be a side surface located between the two bottom surfaces of the cylindrical model.
  • the determining of the to-be-rendered surface of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the multiple vertices of the three-dimensional model may include: when the preset rendering mode of the three-dimensional model is continuous rendering, the starting cross section and the ending cross section of the three-dimensional model may be used as the cross sections to be rendered, and the connecting surfaces between all two adjacent cross sections may be used as the connecting surfaces to be rendered.
  • the starting cross section may be understood as the first cross section to be constructed in the process of constructing the three-dimensional model.
  • the ending cross section may be understood as the last cross section to be constructed in the process of constructing the three-dimensional model.
• the determining of the to-be-rendered surface of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model comprises: when the preset rendering mode of the three-dimensional model is intermittent rendering, the starting cross section, the ending cross section, and at least two cross sections other than the starting cross section and the ending cross section of the three-dimensional model can be used as the cross sections to be rendered; and determining the connecting surface to be rendered based on the cross sections to be rendered, so that the connecting surface is displayed intermittently.
  • whether it is a cross section to be rendered can be determined based on the arrangement number of the cross section.
  • intermittent rendering can be regular intermittent rendering (see Figure 6A) or irregular intermittent rendering (see Figure 6B).
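The selection of cross sections by arrangement number can be sketched as below. The function name, the `mode` strings, and the regular `period` rule are illustrative assumptions; irregular intermittent rendering would simply use a different index set.

```python
def sections_to_render(num_sections, mode="intermittent", period=2):
    """Pick cross-section indices to render. Continuous rendering keeps
    only the starting and ending cross-sections (the connecting surfaces
    fill the span); intermittent rendering also keeps every `period`-th
    interior section so the tube is displayed as a dashed line."""
    if mode == "continuous":
        return [0, num_sections - 1]
    indices = {0, num_sections - 1}
    indices.update(range(0, num_sections, period))
    return sorted(indices)

dashed = sections_to_render(7, "intermittent", 2)
solid = sections_to_render(7, "continuous")
```

Connecting surfaces to render are then chosen between consecutive selected cross sections, producing the intermittent display.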
  • the triangle primitive can be understood as a triangular face.
  • all vertices of the cross section to be rendered can be determined.
  • a preset model face construction algorithm (such as a region generation algorithm) can be used to connect every three vertices of all vertices on the cross section, so that a triangle primitive can be constructed.
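One simple way to connect every three vertices of a cross-section is a triangle fan, sketched below; the fan layout and the function name `fan_triangles` are assumptions, standing in for the preset model face construction algorithm mentioned above.

```python
def fan_triangles(ring):
    """Triangulate one circular cross-section as a fan: fix the first
    vertex index and walk around the ring, emitting one triangle
    primitive (an index triple) per step."""
    return [(ring[0], ring[i], ring[i + 1]) for i in range(1, len(ring) - 1)]

tris = fan_triangles([0, 1, 2, 3, 4])
# 5 ring vertices -> 3 triangles: (0,1,2), (0,2,3), (0,3,4)
```

A ring of m vertices yields m - 2 triangle primitives for the cross-section.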
  • S570 For each connected surface to be rendered, construct a triangular primitive based on every three vertices located on different cross sections.
• For example, for each connection surface to be rendered, all vertices of the connection surface can be determined.
  • a preset model patch construction algorithm can be used to establish a connection line between every three vertices of all vertices on the connection surface, so as to construct a triangular primitive.
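The connecting-surface triangulation, where each primitive spans two adjacent cross-sections, can be sketched as follows; the quad-splitting scheme and the name `side_triangles` are illustrative assumptions.

```python
def side_triangles(m, a, b):
    """Triangulate the connecting surface between two adjacent
    cross-sections `a` and `b`, each listing the indices of its `m`
    vertices in ring order. Each quad between matching vertex pairs is
    split into two triangle primitives whose corners lie on different
    cross-sections."""
    tris = []
    for i in range(m):
        j = (i + 1) % m  # wrap around the ring
        tris.append((a[i], b[i], b[j]))
        tris.append((a[i], b[j], a[j]))
    return tris

side = side_triangles(3, [0, 1, 2], [3, 4, 5])
```

Two adjacent rings of m vertices each produce 2m triangle primitives for the side surface.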
  • S580 Render a three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangular primitives of the surface to be rendered, and display the three-dimensional model in the augmented reality screen.
  • a three-dimensional model of the linear object can be generated based on the spatial coordinates of the plurality of vertices and the triangular primitives of the surface to be rendered. After the three-dimensional model is generated, the three-dimensional model can be rendered based on the rendering parameters of the three-dimensional model. After the rendering is completed, the rendered three-dimensional model can be displayed in the augmented reality screen.
  • the triangular primitives in the three-dimensional model can be refined based on the model refinement parameters set for the three-dimensional model.
  • the refinement parameters may include the shape of the facets, the size of a single facet, the tension between adjacent facets, etc.
  • S590 In response to the display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality screen.
• a surface to be rendered of the three-dimensional model is determined based on a preset rendering mode of the three-dimensional model and a plurality of vertices of the three-dimensional model, wherein the surface to be rendered includes a cross section to be rendered and a connecting surface between two adjacent cross sections; for each cross section to be rendered, a triangular primitive is constructed based on every three vertices located on the cross section; for each connecting surface to be rendered, a triangular primitive is constructed based on every three vertices located on different cross sections; and the three-dimensional model of the linear object is rendered based on the spatial coordinates of multiple vertices and the triangular primitives of the surface to be rendered. In this way, the three-dimensional model of the linear object can be rendered in multiple ways, multiple display forms of the three-dimensional model are obtained, and the content in the augmented reality picture is enriched.
  • the disclosed embodiment provides an example of a method for processing an augmented reality screen.
  • a motion trajectory of a moving object is used as a linear object, and the motion trajectory of the moving object can be a parabola.
  • the linear object can be a parabola.
  • the technical features that are the same or similar to the above-mentioned embodiments are not repeated here.
  • a plurality of key points (P0, P1, P2, ... PN in FIG. 7 ) of a parabola corresponding to the augmented reality picture may be acquired.
• obtaining multiple key points of a parabola corresponding to an augmented reality picture may proceed as follows: the initial velocity vector of the moving object may be V, the initial coordinate may be P(x0, y0), and the gravitational acceleration may be g.
  • the frame rate of drawing the trajectory points of the moving object may be f.
  • the components of the initial velocity vector V on the x-axis and the z-axis may be combined into a horizontal component Vx, and the vertical direction may be taken as a vector Vy alone.
• based on the drawing frame rate, the vertical height may be calculated once for every 1/f increment of virtual time as the object advances horizontally.
  • the virtual time t increment may be 1/f.
• the horizontal distance can be decomposed into the distance in the x-axis direction and the distance in the z-axis direction according to the initial velocity V.
  • multiple trajectory points of the moving object can be obtained, that is, multiple key points of the parabola.
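The trajectory-point sampling described above can be sketched as below. This is a hedged illustration of standard projectile kinematics, not the patent's implementation; the function name and the sample values of g, f and n are assumptions.

```python
def parabola_key_points(p0, v, g=9.8, f=30, n=10):
    """Sample trajectory points of a moving object as key points of the
    parabola. `p0` is the initial position (x0, y0, z0), `v` the initial
    velocity (vx, vy, vz) with y vertical, `g` the gravitational
    acceleration, and `f` the drawing frame rate, so the virtual time t
    advances by 1/f per point."""
    x0, y0, z0 = p0
    vx, vy, vz = v
    points = []
    for i in range(n):
        t = i / f  # virtual time in 1/f increments
        points.append((x0 + vx * t,              # horizontal, x direction
                       y0 + vy * t - 0.5 * g * t * t,  # vertical height
                       z0 + vz * t))             # horizontal, z direction
    return points

pts = parabola_key_points((0.0, 0.0, 0.0), (1.0, 2.0, 0.0), g=10.0, f=10, n=3)
```

Each returned point is one key point P(n) on which a cross-section circle is later drawn.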
  • a circle is drawn with the key point (P(n) in FIG7 ) as the center, and multiple vertices of the three-dimensional model of the linear object are determined based on the points on the circle (v(n)(0), v(n)(1), ... v(n)(m) in FIG7 ); the circle is taken as the cross section of the three-dimensional model, and for each cross section, the vector between the center of the cross section and the center of the cross section adjacent to the cross section is taken as the reference vector (the vector obtained by calculating p(n-1) and p(n) in FIG7 ), and the rotation matrix corresponding to the cross section is calculated based on the horizontal direction vector and the reference vector (M(n) in FIG7 ).
  • the spatial coordinates of the vertices corresponding to the cross section are determined.
• the surface to be rendered of the 3D model is determined; the surface to be rendered may include the cross section to be rendered and the connecting surface between two adjacent cross sections.
  • a triangle primitive is constructed based on every three vertices located on the cross section (see Figure 8).
  • a triangle primitive is constructed based on every three vertices located on different cross sections (see Figure 9).
  • the starting cross-section, the ending cross-section, and at least two cross-sections other than the starting cross-section and the ending cross-section of the three-dimensional model of the parabola are used as cross-sections to be rendered; and, based on the cross-sections to be rendered, the connecting surfaces to be rendered are determined so that the connecting surfaces are displayed intermittently (see Figure 10).
  • a three-dimensional model of a linear object is rendered based on the spatial coordinates of a plurality of vertices and triangular primitives of a surface to be rendered, and the three-dimensional model is displayed in an augmented reality screen.
  • the disclosed embodiment displays a three-dimensional model of a linear object on an augmented reality screen, which not only integrates the linear object into the augmented reality screen more realistically, but also enables effective interaction with the linear object according to the user's personalized needs, thereby improving the user experience.
  • FIG11 is a schematic diagram of the structure of an augmented reality image processing device provided by an embodiment of the present disclosure. As shown in FIG11 , the device includes: a request module 610 and a display module 620 .
  • the request module 610 is configured to respond to a rendering trigger request for an augmented reality screen, and display the three-dimensional model of the linear object corresponding to the augmented reality screen in the augmented reality screen;
• the display module 620 is configured to respond to a display adjustment operation for the three-dimensional model, and display the adjusted three-dimensional model of the linear object in the augmented reality screen, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation and/or a display angle adjustment operation.
  • a three-dimensional model of a linear object corresponding to the augmented reality screen is displayed in the augmented reality screen, thereby improving the stereoscopic and realistic feeling of the linear object in the augmented reality screen.
  • the adjusted three-dimensional model of the linear object is displayed in the augmented reality screen, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.
  • the request module 610 includes a key point acquisition unit and a key point rendering unit; wherein,
  • a key point acquisition unit configured to acquire a plurality of key points of a linear object corresponding to the augmented reality picture
  • a key point rendering unit is configured to render a three-dimensional model of the linear object based on a plurality of key points.
  • the three-dimensional model is displayed in the augmented reality screen.
  • the key point rendering unit is configured to draw a circle with each key point as the center, determine multiple vertices of the three-dimensional model of the linear object based on the points on the circle, and render the three-dimensional model of the linear object based on the multiple vertices.
  • the key point rendering unit includes a rotation matrix determination subunit and a vertex rendering subunit, wherein:
  • a rotation matrix determination subunit configured to use the circle as a cross section of the three-dimensional model, and for each cross section, determine a rotation matrix corresponding to the cross section based on cross sections adjacent to the cross section;
  • the vertex rendering subunit is configured to determine the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, and render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices.
  • the rotation matrix determination subunit is configured to use the vector between the center of the cross section and the center of the cross section adjacent to the cross section as a reference vector, and calculate the rotation matrix corresponding to the cross section based on the horizontal direction vector and the reference vector.
  • the vertex rendering subunit is configured to determine a to-be-rendered surface of the three-dimensional model based on a preset rendering mode of the three-dimensional model and a plurality of vertices of the three-dimensional model, wherein the to-be-rendered surface includes a cross section to be rendered and a connecting surface between two adjacent cross sections;
  • the three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices and the triangular primitives of the surface to be rendered.
  • the vertex rendering subunit can be configured to use the starting cross section and the ending cross section of the three-dimensional model as cross sections to be rendered when the preset rendering mode of the three-dimensional model is continuous rendering, and to use the connecting surfaces between all adjacent cross sections as connecting surfaces to be rendered.
  • the vertex rendering subunit can be configured to, when the preset rendering mode of the three-dimensional model is intermittent rendering, use the starting cross section, the ending cross section, and at least two cross sections other than the starting cross section and the ending cross section of the three-dimensional model as cross sections to be rendered; and determine the connecting surface to be rendered based on the cross section to be rendered, so that the connecting surface is displayed intermittently.
  • a key point acquisition unit is configured to generate multiple key points of the linear object corresponding to the augmented reality screen based on a preset algorithm; or, determine the associated objects of the linear object to be rendered in the augmented reality screen, and determine the multiple key points of the linear object corresponding to the augmented reality screen based on the motion trajectory of the associated objects.
  • the device for processing augmented reality images provided in the embodiments of the present disclosure can execute the method for processing augmented reality images provided in any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • FIG12 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present disclosure.
  • the terminal device in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), etc., and fixed terminals such as digital televisions (TVs), desktop computers, etc.
  • the electronic device shown in FIG12 is merely an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 to a random access memory (RAM) 703.
  • Various programs and data required for the operation of the electronic device 700 are also stored in the RAM 703.
  • the processing device 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 708 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 709.
  • the communication device 709 may allow the electronic device 700 to communicate wirelessly or wired with other devices to exchange data.
• Although FIG. 12 shows an electronic device 700 with various devices, it should be understood that not all of the devices shown are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication device 709, or installed from a storage device 708, or installed from a ROM 702.
• When the computer program is executed by the processing device 701, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
  • the electronic device provided in the embodiment of the present disclosure and the method for processing the augmented reality image provided in the above embodiment belong to the same inventive concept.
  • the technical details not fully described in this embodiment can be referred to the above embodiment, and this embodiment has the same beneficial effects as the above embodiment.
  • the embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • the program is executed by a processor, the method for processing an augmented reality image provided by the above embodiments is implemented.
  • the computer-readable medium disclosed above may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above.
  • Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries a computer-readable program code. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • Computer readable signal media may also be any computer readable medium other than computer readable storage media, which may send, propagate, or transmit programs for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the computer readable medium may be transmitted using any suitable medium, including but not limited to: wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
  • the client and the server may communicate using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • the electronic device in response to a rendering trigger request for an augmented reality screen, displays a three-dimensional model of a linear object corresponding to the augmented reality screen in the augmented reality screen; in response to a display adjustment operation for the linear object, displays the adjusted three-dimensional model of the linear object in the augmented reality screen, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation and/or a display angle adjustment operation.
  • the storage medium may be a non-transitory storage medium.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages, such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • each box in the flowchart or block diagram may represent a module, a program segment, or a portion of a code, which contains one or more executable instructions for implementing a specified logical function.
  • the functions marked in the boxes may also occur in an order different from that marked in the accompanying drawings. For example, two boxes represented in succession may actually be executed substantially in parallel, and they may sometimes be executed in the opposite order, depending on the functions involved.
  • each box in the block diagram and/or flowchart, and combinations of boxes in the block diagram and/or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • the name of a unit does not limit the unit itself in some cases.
  • the first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing.
  • machine-readable storage media may include electrical connections based on one or more lines, portable computer disks, hard disks, random access memories (RAM), read-only memories (ROM), erasable programmable read-only memories (EPROM) or flash memories, optical fibers, portable compact disk read-only memories (CD-ROMs), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • Example 1 provides a method for processing an augmented reality image, including:
  • the adjusted three-dimensional model of the linear object is displayed in the augmented reality screen, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation and/or a display angle adjustment operation.
  • Example 2 provides a method for processing an augmented reality image, including:
  • the step of displaying the three-dimensional model of the linear object corresponding to the augmented reality picture in the augmented reality picture includes:
  • a three-dimensional model of the linear object is rendered based on the plurality of key points, and the three-dimensional model is displayed in the augmented reality screen.
  • Example 3 provides a method for processing an augmented reality image, including:
  • the step of rendering the three-dimensional model of the linear object based on the plurality of key points comprises:
  • a circle is drawn with the key point as the center, a plurality of vertices of the three-dimensional model of the linear object is determined based on the points on the circle, and the three-dimensional model of the linear object is rendered based on the plurality of vertices.
  • Example 4 provides a method for processing an augmented reality picture, including:
  • the step of rendering the three-dimensional model of the linear object based on the plurality of vertices comprises:
  • the spatial coordinates of the vertices corresponding to the cross section are determined based on the rotation matrix, and the three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices.
  • Example 5 provides a method for processing an augmented reality picture, including:
  • the step of determining a rotation matrix corresponding to the cross section based on a cross section adjacent to the cross section comprises:
  • a vector between the center of the cross section and the center of a cross section adjacent to the cross section is used as a reference vector, and a rotation matrix corresponding to the cross section is calculated according to a horizontal direction vector and the reference vector.
  • Example 6 provides a method for processing an augmented reality picture, including:
  • the step of rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices comprises:
  • the three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices and the triangular primitives of the surface to be rendered.
  • Example 7 provides a method for processing an augmented reality picture, including:
  • the determining the to-be-rendered surface of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model includes:
  • the starting cross section and the ending cross section of the three-dimensional model are used as cross sections to be rendered, and all connecting surfaces between any two adjacent cross sections are used as connecting surfaces to be rendered.
  • Example 8 provides a method for processing an augmented reality picture, including:
  • the determining the to-be-rendered surface of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model includes:
  • the starting cross section, the ending cross section and at least two cross sections other than the starting cross section and the ending cross section of the three-dimensional model are used as cross sections to be rendered; and, based on the cross sections to be rendered, the connecting surfaces to be rendered are determined so that the connecting surfaces are displayed intermittently.
  • Example 9 provides a method for processing an augmented reality picture, including:
  • the step of acquiring a plurality of key points of the linear object corresponding to the augmented reality picture includes:
  • An associated object of the linear object to be rendered in the augmented reality picture is determined, and a plurality of key points of the linear object corresponding to the augmented reality picture is determined based on a motion trajectory of the associated object.
  • Example 10 provides a device for processing an augmented reality image, including:
  • a request module configured to display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture in response to a rendering trigger request for the augmented reality picture;
  • the display module is configured to display the adjusted three-dimensional model of the linear object in the augmented reality picture in response to a display adjustment operation on the three-dimensional model, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation and/or a display angle adjustment operation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for processing an augmented reality picture, an electronic device, and a storage medium. The method includes: in response to a rendering trigger request for an augmented reality picture, displaying a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture (S110); and in response to a display adjustment operation on the linear object, displaying the adjusted three-dimensional model of the linear object in the augmented reality picture (S120).

Description

Method and apparatus for processing an augmented reality picture, electronic device, and storage medium

This application claims priority to Chinese Patent Application No. 202211338450.2, filed with the China Patent Office on October 28, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

Embodiments of the present disclosure relate to computer technology, and for example to a method and apparatus for processing an augmented reality picture, an electronic device, and a storage medium.

Background

Augmented Reality (AR) technology captures the real world in real time to obtain real-world pictures and overlays virtual information on those pictures.

Summary

The present disclosure provides a method and apparatus for processing an augmented reality picture, an electronic device, and a storage medium.

In a first aspect, an embodiment of the present disclosure provides a method for processing an augmented reality picture, the method including:

in response to a rendering trigger request for an augmented reality picture, displaying a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and

in response to a display adjustment operation on the linear object, displaying the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes at least one of a display position adjustment operation, a display size adjustment operation, or a display angle adjustment operation.

In a second aspect, an embodiment of the present disclosure further provides an apparatus for processing an augmented reality picture, the apparatus including:

a request module configured to, in response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and

a display module configured to, in response to a display adjustment operation on the three-dimensional model, display the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes at least one of a display position adjustment operation, a display size adjustment operation, or a display angle adjustment operation.

In a third aspect, an embodiment of the present disclosure further provides an electronic device, the electronic device including:

one or more processors; and

a storage apparatus configured to store one or more programs,

wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for processing an augmented reality picture according to any embodiment of the present disclosure.

In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to perform the method for processing an augmented reality picture according to any embodiment of the present disclosure.
Brief Description of the Drawings

Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

FIG. 1 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 3 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 4 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 5 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 6A is an example diagram of the rendering of a three-dimensional model of a linear object in a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 6B is an example diagram of the rendering of a three-dimensional model of a linear object in another method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 7 is an example flow diagram of a manner of acquiring key points of a linear object in a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 8 is an example diagram of triangle primitives on a cross section of a three-dimensional model of a linear object in a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 9 is an example diagram of triangle primitives on a connecting surface of a three-dimensional model of a linear object in a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 10 is an example diagram of intermittent rendering of a three-dimensional model of a linear object in a method for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 11 is a schematic structural diagram of an apparatus for processing an augmented reality picture according to an embodiment of the present disclosure;

FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description

Two-dimensional lines are used to enrich and assist augmented reality pictures; that is, two-dimensional lines are drawn on the augmented reality picture. However, this approach often makes the lines in the augmented reality picture appear stiff and lacking in stereoscopic effect, which degrades the display quality of the picture. Moreover, the way two-dimensional lines are presented is generally fixed relative to the way the picture is displayed, which cannot satisfy users' personalized interaction needs and harms the user experience.

To address the above situation, embodiments of the present disclosure provide a method and apparatus for processing an augmented reality picture, an electronic device, and a storage medium.

Embodiments of the present disclosure will be described below with reference to the drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.

As used herein, the term "include" and variations thereof are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.

It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.

It can be understood that, before the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.

For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require acquiring and using the user's personal information. In this way, the user can autonomously decide, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the embodiments of the present disclosure.

As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device.

It can be understood that the above process of notifying and obtaining user authorization is only illustrative and does not limit the implementation of the present disclosure; other manners that satisfy relevant laws and regulations may also be applied to the implementation of the present disclosure.

It can be understood that the data involved in this embodiment (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of corresponding laws, regulations, and relevant provisions.
FIG. 1 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure. This embodiment of the present disclosure can render an augmented reality picture. The method may be performed by an apparatus for processing an augmented reality picture, and the apparatus may be implemented in the form of software and/or hardware, for example by an electronic device, which may be a mobile terminal, a personal computer (PC), a server, or the like.

As shown in FIG. 1, the method of this embodiment may include:

S110: In response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture.

An augmented reality picture is generally a picture in which a real environment and a virtual object coexist in the same space, that is, a picture generated by applying a virtual object to a real environment. In the embodiments of the present disclosure, the virtual object may be a three-dimensional model of a linear object. A linear object can be understood as an object with linear characteristics. The shape of the linear object may include at least one of a straight line, a curve, and a polyline, and the shape may be presented as either a solid line or a dashed line.

In the embodiments of the present disclosure, the linear object may be a line-type special-effect object of the augmented reality picture. For example, it may be an identification line between two image objects of at least one preset marker object, or a trajectory line identifying the motion trajectory of a preset marker, or a preset line-type special-effect object, etc. It should be noted that the line-type special-effect object may take many forms, such as a whip or a stick.

Exemplarily, when a preset marker moving in the real environment (such as a ball, a paper airplane, or a dart) is recognized, the linear object may be determined based on the movement trajectory of the preset marker; when two preset markers are detected in the augmented reality picture, the connecting line between the two preset markers may be used as the linear object; or alternatively, the rendering trigger request may be parsed to obtain object shape information (such as a straight-line shape) describing the linear object corresponding to the augmented reality picture, and the linear object corresponding to the object shape information described in the rendering trigger request may then be determined according to a preset correspondence between linear objects and object shape information.

In the embodiments of the present disclosure, the three-dimensional model of the linear object can be understood as a rendering model for representing the three-dimensional characteristics of the linear object, for example a straight-line three-dimensional model, a curve three-dimensional model, or a polyline three-dimensional model. The type of the three-dimensional model of the linear object may be a mesh type or a point-cloud type; that is, the three-dimensional model of the linear object may be a three-dimensional mesh model of the linear object, or a three-dimensional point-cloud model of the linear object. The rendering trigger request can be understood as a trigger request for displaying the three-dimensional model of the linear object corresponding to the augmented reality picture in the augmented reality picture.

Considering the display form of the linear object and the underlying rendering logic, the number of three-dimensional models of linear objects corresponding to the augmented reality picture may be one, two, or more.

In one embodiment, after the rendering trigger request for the augmented reality picture is received, the three-dimensional model of the linear object corresponding to the augmented reality picture may be obtained, based on the rendering trigger request, from a database for storing three-dimensional models; that is, the three-dimensional model of the linear object corresponding to the augmented reality picture is read from the database for storing three-dimensional models and loaded into memory. After loading is complete, the three-dimensional model of the linear object may be rendered, so that the three-dimensional model of the linear object is displayed in the augmented reality picture, improving the picture quality of the augmented reality picture. For example, the database for storing three-dimensional models may be a local database or a remote database.

In another embodiment, after the rendering trigger request for the augmented reality picture is received, the three-dimensional model of the linear object corresponding to the augmented reality picture may be constructed based on the rendering trigger request. After model construction is complete, the three-dimensional model of the linear object may be rendered, so that the three-dimensional model of the linear object is displayed in the augmented reality picture. For example, constructing the three-dimensional model of the linear object corresponding to the augmented reality picture based on the rendering trigger request includes: parsing the rendering trigger request to obtain model characteristic data of the three-dimensional model of the linear object to be displayed in the augmented reality picture, and then constructing the three-dimensional model of the linear object corresponding to the augmented reality picture based on the model characteristic data. The benefit of this processing is that the three-dimensional model of the linear object displayed in the augmented reality picture can be drawn dynamically according to personalized needs.

In the embodiments of the present disclosure, the rendering trigger request may be obtained as follows: receiving a trigger operation for triggering rendering of the augmented reality picture, and generating the rendering trigger request for the augmented reality picture based on the trigger operation. It should be noted that the trigger operation for triggering rendering of the augmented reality picture may be triggered in many ways. Exemplarily, the trigger operation may be one generated by acting on a trigger control for triggering rendering of the augmented reality picture; or one generated based on a collected voice instruction for rendering the augmented reality picture; or one generated based on collected image information for rendering the augmented reality picture.

The trigger control may include a physical trigger control and/or a virtual trigger control. A physical trigger control may be a physical control such as a push button or a slide button. A virtual trigger control may be a control displayed on a touch screen for triggering rendering of the augmented reality picture. It should also be noted that the icon style, display effect, and display position of the virtual trigger control may be set according to actual needs and are not limited here. For example, receiving a trigger operation for triggering rendering of the augmented reality picture may be receiving a trigger operation (e.g., clicking or pressing a button) acting on the trigger control for triggering rendering of the augmented reality picture.

In the embodiments of the present disclosure, there are many ways to render the three-dimensional model of the linear object, which are not limited here.

As an optional implementation of the embodiments of the present disclosure, rendering the three-dimensional model of the linear object may include: parsing the rendering trigger request to obtain rendering parameters for the three-dimensional model of the linear object, and then rendering the three-dimensional model of the linear object based on the rendering parameters. The rendering parameters may include material, lighting, texture maps, and the like.

As another optional implementation of the embodiments of the present disclosure, rendering the three-dimensional model of the linear object may include: after the rendering trigger request for the augmented reality picture is received, reading, based on the rendering trigger request, the rendering parameters corresponding to the three-dimensional model of the linear object from pre-configured rendering parameter information, and rendering the three-dimensional model of the linear object based on the rendering parameters.

S120: In response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.

The display adjustment operation can be understood as an operation for adjusting the linear object displayed in the augmented reality picture, and it can take many forms. Exemplarily, the display adjustment operation may be a touch operation acting on the linear object displayed in the augmented reality picture, such as a single-click, slide, or double-click operation; or a click operation acting on a control for adjusting the linear object displayed in the augmented reality picture; or a press operation acting on a physical button for adjusting the linear object displayed in the augmented reality picture. The display adjustment operation may include a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation. In the embodiments of the present disclosure, based on the display adjustment operation, the display of the linear object can not only be adjusted to better meet the user's personalized needs, but the three-dimensional model of the linear object can also be presented from all directions and multiple angles, enriching the augmented reality picture.

In the embodiments of the present disclosure, the display position adjustment operation can be understood as an operation for adjusting the position of the linear object displayed in the augmented reality picture; in other words, the linear object displayed at the current display position in the augmented reality picture is moved from the current display position to a target display position in the augmented reality picture. The target display position can be understood as the display position obtained by moving the linear object displayed in the augmented reality picture in a certain direction, taking the current position of the linear object as the reference position. The direction may include moving left, moving right, moving up, moving down, and so on. Exemplarily, a linear object displayed at the lower-left corner of the augmented reality picture may be moved from the lower-left corner to the upper-left corner of the augmented reality picture.

The display size adjustment operation can be understood as an operation for adjusting the size of the linear object displayed in the augmented reality picture; that is, the linear object displayed at its current display size in the augmented reality picture is adjusted from the current display size to a target display size. The target display size can be understood as the size obtained by enlarging or shrinking the linear object displayed in the augmented reality picture. The display angle adjustment operation can be understood as an operation for adjusting the angle of the linear object displayed in the augmented reality picture; in other words, the linear object displayed at the current display angle in the augmented reality picture is adjusted from the current display angle to a target display angle, so that the linear object is displayed in the augmented reality picture at the target display angle. The target display angle can be understood as the angle obtained after adjusting the display angle of the linear object displayed at the current display angle in the augmented reality picture.

In one embodiment, after a display position adjustment operation on the linear object is received, display position operation information of the display position adjustment operation can be obtained. A target display position for displaying the three-dimensional model of the linear object in the augmented reality picture can then be determined based on the display position operation information, and the three-dimensional model of the linear object can be rendered based on that display position, so that the three-dimensional model of the linear object is displayed at the target display position of the augmented reality picture.

In another embodiment, after a display angle adjustment operation on the linear object is received, display angle operation information of the display angle adjustment operation can be obtained. A rotation angle and a rotation axis for the display angle operation on the three-dimensional model of the linear object can then be determined based on the display angle operation information, so that the target display angle of the three-dimensional model of the linear object can be determined according to the rotation axis and the rotation angle, and the three-dimensional model of the linear object can be displayed in the augmented reality picture at the target display angle.

In yet another embodiment, after a display size adjustment operation on the linear object is received, display size operation information of the display size adjustment operation can be obtained. The three-dimensional model of the linear object can then be rebuilt based on the display size operation information, yielding a rebuilt three-dimensional model of the linear object, which can be rendered into the augmented reality picture so that the rebuilt three-dimensional model of the linear object is displayed in the augmented reality picture.

It should be noted that, in the embodiments of the present disclosure, there are many ways to rebuild the three-dimensional model of the linear object. As an optional implementation of the embodiments of the present disclosure, rebuilding may include: determining, based on the display size operation information, a target display size for displaying the three-dimensional model of the linear object in the augmented reality picture; and rebuilding the three-dimensional model of the linear object based on the target display size.

As another optional implementation in the embodiments of the present disclosure, rebuilding may include: determining the current display size of the three-dimensional model of the linear object; obtaining, based on the display size operation information, a size ratio relative to the current display size; and then obtaining, based on the size ratio and the current display size, the target display size for displaying the three-dimensional model of the linear object in the augmented reality picture, so that the three-dimensional model of the linear object at the target display size is displayed in the augmented reality picture.

In the embodiments of the present disclosure, by displaying, in response to a rendering trigger request for an augmented reality picture, a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture, the stereoscopic effect and realism of the linear object in the augmented reality picture can be improved. In response to a display adjustment operation on the linear object, the adjusted three-dimensional model of the linear object is displayed in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation. In the embodiments of the present disclosure, by displaying the three-dimensional model of the linear object in the augmented reality picture, the linear object is not only blended into the augmented reality picture more realistically, but the user can also interact effectively with the linear object according to personalized needs, improving the user experience.
FIG. 2 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure. Building on the above embodiments, this embodiment exemplarily describes how the three-dimensional model of the linear object is displayed in the augmented reality picture. For example, displaying the three-dimensional model of the linear object corresponding to the augmented reality picture in the augmented reality picture includes: acquiring a plurality of key points of the linear object corresponding to the augmented reality picture; and rendering the three-dimensional model of the linear object based on the plurality of key points and displaying the three-dimensional model in the augmented reality picture. For specific implementations, refer to the description of this embodiment. Technical features identical or similar to those of the foregoing embodiments are not repeated here.

As shown in FIG. 2, the method of this embodiment may include:

S210: In response to a rendering trigger request for an augmented reality picture, acquire a plurality of key points of a linear object corresponding to the augmented reality picture.

A key point of the linear object can be understood as a characteristic point of the linear object. The plurality of key points of the linear object can trace out the shape of the linear object, such as a straight line, a curve, or a polyline. To reflect the position of each key point of the linear object more accurately, a coordinate system may be constructed in advance, so that the position information of each key point of the linear object can be expressed as a coordinate point in this coordinate system.

For example, after the rendering trigger request for the augmented reality picture is received, the linear object corresponding to the augmented reality picture may be determined based on the rendering trigger request, and the plurality of key points of the linear object may then be acquired.

In the embodiments of the present disclosure, there are many ways to acquire the plurality of key points of the linear object corresponding to the augmented reality picture.

As an optional implementation of the embodiments of the present disclosure, acquiring the plurality of key points of the linear object corresponding to the augmented reality picture may include: generating the plurality of key points of the linear object corresponding to the augmented reality picture based on a preset algorithm. The preset algorithm may be an algorithm set in advance for generating key points.

For example, generating the plurality of key points of the linear object corresponding to the augmented reality picture based on the preset algorithm may include: randomly generating the plurality of key points of the linear object corresponding to the augmented reality picture based on the preset algorithm; or, after the rendering trigger request for the augmented reality picture is received, parsing the rendering trigger request to obtain a preset drawing frame rate for the key points of the linear object corresponding to the augmented reality picture, and then generating the plurality of key points of the linear object corresponding to the augmented reality picture based on the preset drawing frame rate and the preset algorithm.

As another optional implementation of the embodiments of the present disclosure, acquiring the plurality of key points of the linear object corresponding to the augmented reality picture may include: determining an associated object of the linear object to be rendered in the augmented reality picture, and determining the plurality of key points of the linear object corresponding to the augmented reality picture based on the motion trajectory of the associated object. The linear object to be rendered can be understood as a linear object in the augmented reality picture that needs to be rendered. The associated object can be understood as an object in the augmented reality picture that has an association relationship with the linear object to be rendered. Exemplarily, the associated object having an association relationship with the linear object to be rendered may be a paper airplane displayed in the augmented reality picture, and the linear object to be rendered may be the flight trajectory of the paper airplane displayed in the augmented reality picture.

For example, the associated object having an association relationship with the linear object to be rendered in the augmented reality picture is determined. After the associated object is determined, its motion trajectory can be acquired. After the motion trajectory is acquired, feature extraction can be performed on the motion trajectory to obtain a plurality of feature points of the motion trajectory, and the extracted feature points are used as the plurality of key points of the linear object corresponding to the augmented reality picture.

Exemplarily, the associated object of the linear object to be rendered may be an airplane flying in the augmented reality picture; the linear object to be rendered may be the motion trajectory of one or more positions on the flying airplane in the augmented reality picture; and the plurality of key points of the linear object to be rendered can be understood as the feature points of the motion trajectory of a certain position on the flying airplane in the augmented reality picture.

As a further optional implementation of the embodiments of the present disclosure, acquiring the plurality of key points of the linear object corresponding to the augmented reality picture includes: determining the shape of the linear object corresponding to the augmented reality picture, and then acquiring the plurality of key points of the linear object according to its shape. For example, acquiring the plurality of key points of the linear object according to its shape may include: acquiring, based on the shape of the linear object, the key points corresponding to the shape from a database for storing key points of linear objects; or generating the key points of the linear object based on its shape.

S220: Render the three-dimensional model of the linear object based on the plurality of key points, and display the three-dimensional model in the augmented reality picture.

For example, after the plurality of key points of the linear object are obtained, the three-dimensional model of the linear object corresponding to the plurality of key points can be acquired and then rendered into the augmented reality picture, so that the three-dimensional model is displayed in the augmented reality picture.

In the embodiments of the present disclosure, there are many ways to acquire the three-dimensional model of the linear object corresponding to the plurality of key points.

As an optional implementation of the embodiments of the present disclosure, the plurality of key points may be fitted to obtain a fitting result, and the model may then be rebuilt based on the fitting result, yielding the three-dimensional model of the linear object corresponding to the plurality of key points. The benefit of this approach is that the three-dimensional model of the linear object can be drawn according to personalized needs.

As another optional implementation of the embodiments of the present disclosure, a three-dimensional model matching the plurality of key points may be matched from a database for storing three-dimensional models, and the matched three-dimensional model is used as the three-dimensional model of the linear object corresponding to the plurality of key points. The benefit of this approach is that the three-dimensional model of the linear object can be obtained more quickly and effectively, improving the response speed to the rendering trigger request for the augmented reality picture.

S230: In response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.

In the embodiments of the present disclosure, by acquiring a plurality of key points of the linear object corresponding to the augmented reality picture, rendering the three-dimensional model of the linear object based on the plurality of key points, and displaying the three-dimensional model in the augmented reality picture, targeted acquisition of the three-dimensional model of the linear object can be realized, enriching the augmented reality picture.
FIG. 3 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure. Building on the above embodiments, this embodiment exemplarily describes how the three-dimensional model of the linear object is rendered based on the plurality of key points. For example, rendering the three-dimensional model of the linear object based on the plurality of key points includes: for each key point, drawing a circle with the key point as its center, determining a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle, and rendering the three-dimensional model of the linear object based on the plurality of vertices. For specific implementations, refer to the description of this embodiment. Technical features identical or similar to those of the foregoing embodiments are not repeated here.

As shown in FIG. 3, the method of this embodiment may include:

S310: In response to a rendering trigger request for an augmented reality picture, acquire a plurality of key points of a linear object corresponding to the augmented reality picture.

S320: For each key point, draw a circle with the key point as its center, determine a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle, render the three-dimensional model of the linear object based on the plurality of vertices, and display the three-dimensional model in the augmented reality picture.

For example, drawing a circle with the key point as its center may include: acquiring a preset circle radius corresponding to the key point, taking the key point as the center, and drawing the circle based on the preset radius corresponding to the key point and the center. The preset circle radius can be understood as a radius preset for each key point. It should be noted that the preset radii corresponding to the individual key points may be the same or different. For example, acquiring the preset circle radius corresponding to a key point may include: parsing the rendering trigger request to obtain the preset circle radius for each key point contained in the request; or acquiring radius configuration information configured for the plurality of key points, in which a preset circle radius is configured for each key point, and matching the preset radius corresponding to the key point in the radius configuration information.

In the embodiments of the present disclosure, there are many ways to determine the plurality of vertices of the three-dimensional model of the linear object based on the points on the circle.

As an optional implementation in the embodiments of the present disclosure, any diameter of the circle is drawn through its center and rotated by a preset rotation angle (such as 5, 10, or 15 degrees); during the rotation, the intersection points of the diameter with the circle are determined, and these intersection points are used as the plurality of vertices of the three-dimensional model of the linear object.

As another optional implementation in the embodiments of the present disclosure, any point on the circle is selected and used as a fixed point, a plurality of straight lines are drawn through the fixed point, and the intersection points of these lines with the circle are used as the plurality of vertices of the three-dimensional model of the linear object.
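The first approach above, rotating a diameter by a preset angle and collecting the intersection points with the circle, amounts to sampling the circle at a fixed angular step. The following is a minimal Python sketch of that sampling; the function name, the 2D cross-section-plane coordinates, and the default step are illustrative assumptions, not code from the disclosure.

```python
import math

def circle_vertices(center, radius, step_deg=10.0):
    """Sample vertices on a circle of the given radius around `center`
    (a 2D point in the cross-section plane), one every `step_deg` degrees."""
    n = int(360.0 / step_deg)
    cx, cy = center
    return [(cx + radius * math.cos(math.radians(i * step_deg)),
             cy + radius * math.sin(math.radians(i * step_deg)))
            for i in range(n)]

# Example: a unit circle sampled every 90 degrees yields 4 vertices.
verts = circle_vertices((0.0, 0.0), 1.0, step_deg=90.0)
```

A smaller angular step produces more vertices per cross section and hence a smoother tube at the cost of more triangle primitives.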
In the embodiments of the present disclosure, rendering the three-dimensional model of the linear object based on the plurality of vertices and displaying the three-dimensional model in the augmented reality picture may include: performing curve fitting on the plurality of vertices so as to fit the vertices into curves, thereby obtaining a plurality of curves; after the curves are obtained, constructing surfaces from the curves based on a preset surface modeling method, thereby obtaining a plurality of surfaces; after the surfaces are obtained, constructing the three-dimensional model of the linear object from the surfaces; after the three-dimensional model is constructed, rendering the model based on its rendering information (such as texture, material, maps, lighting, and model skeleton actions); and, after rendering is complete, displaying the rendered three-dimensional model in the augmented reality picture.

S330: In response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.

In the embodiments of the present disclosure, for each key point, a circle is drawn with the key point as its center, a plurality of vertices of the three-dimensional model of the linear object are determined based on the points on the circle, and the three-dimensional model of the linear object is rendered based on the plurality of vertices, enabling dynamic construction of the three-dimensional model of the linear object.
FIG. 4 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure. Building on the above embodiments, this embodiment exemplarily describes how the three-dimensional model of the linear object is rendered based on the plurality of vertices. For example, rendering the three-dimensional model of the linear object based on the plurality of vertices includes: taking the circle as a cross section of the three-dimensional model and, for each cross section, determining a rotation matrix corresponding to the cross section based on the cross section adjacent to it; and determining the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, and rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices. For specific implementations, refer to the description of this embodiment. Technical features identical or similar to those of the foregoing embodiments are not repeated here.

As shown in FIG. 4, the method of this embodiment may include:

S410: In response to a rendering trigger request for an augmented reality picture, acquire a plurality of key points of a linear object corresponding to the augmented reality picture.

S420: For each key point, draw a circle with the key point as its center, and determine a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle.

S430: Take the circle as a cross section of the three-dimensional model and, for each cross section, determine a rotation matrix corresponding to the cross section based on the cross section adjacent to it.

In the embodiments of the present disclosure, determining the rotation matrix corresponding to the cross section based on the cross section adjacent to it includes: taking the vector between the center of the cross section and the center of the adjacent cross section as a reference vector, and calculating the rotation matrix corresponding to the cross section from a horizontal direction vector and the reference vector.

The reference vector can be understood as the directed line segment between the center of the cross section and the center of the adjacent cross section; it may point from the center of the cross section to the center of the adjacent cross section, or from the center of the adjacent cross section to the center of the cross section. The horizontal direction vector can be understood as any vector parallel to the X axis in a three-dimensional coordinate system; in other words, any vector perpendicular to the YZ plane, where the YZ plane is the plane formed by the Y axis and the Z axis.

In the embodiments of the present disclosure, calculating the rotation matrix corresponding to the cross section from the horizontal direction vector and the reference vector may include: passing the horizontal direction vector and the reference vector as actual arguments to the entry parameters of a predefined rotation-matrix routine for computing the rotation matrix corresponding to the cross section, and, after the arguments are passed, executing the routine to compute the rotation matrix corresponding to the cross section.

For example, the horizontal direction vector and the reference vector are substituted into the dot-product formula to obtain the rotation angle between them. The rotation axis of the cross section and its adjacent cross section can then be determined based on the rotation angle, and the rotation matrix corresponding to the cross section can be computed from the rotation angle, the rotation axis, and Rodrigues' rotation formula.
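The computation just described can be sketched in Python as follows. Taking the axis from the cross product of the two vectors is one common choice assumed here for illustration (the description only states that the axis is determined from the rotation angle); function and variable names are likewise illustrative, not from the disclosure.

```python
import math

def rotation_matrix(horizontal, reference):
    """Rotation matrix taking the horizontal direction vector onto the
    reference vector, via Rodrigues' rotation formula, as 3x3 nested lists."""
    def unit(v):
        l = math.sqrt(sum(c * c for c in v))
        return [c / l for c in v]
    h, r = unit(horizontal), unit(reference)
    # Rotation angle from the dot product of the two unit vectors.
    cos_t = max(-1.0, min(1.0, sum(a * b for a, b in zip(h, r))))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    # Rotation axis (assumed here): cross product of the two vectors.
    ax = [h[1] * r[2] - h[2] * r[1],
          h[2] * r[0] - h[0] * r[2],
          h[0] * r[1] - h[1] * r[0]]
    al = math.sqrt(sum(c * c for c in ax))
    if al < 1e-12:                       # parallel vectors: identity rotation
        return [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    kx, ky, kz = (c / al for c in ax)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    # Rodrigues' formula: R = I + sin(t)*K + (1 - cos(t))*K^2
    return [[(1.0 if i == j else 0.0)
             + sin_t * K[i][j] + (1.0 - cos_t) * K2[i][j]
             for j in range(3)] for i in range(3)]

# Rotating the x axis onto the y axis (a 90-degree rotation about z):
R = rotation_matrix([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

For the x-to-y example, R should be (up to floating-point error) the standard 90-degree rotation about the z axis.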
S440: Determine the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices, and display the three-dimensional model in the augmented reality picture.

In the embodiments of the present disclosure, determining the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix may include: determining the spatial coordinates of the vertices corresponding to the adjacent cross section, and then determining the spatial coordinates of the vertices corresponding to the cross section based on the spatial coordinates of the vertices of the adjacent cross section and the rotation matrix. The benefit of this processing is that the rotational orientation of the cross section can be corrected by its rotation matrix.

For example, determining the spatial coordinates of the vertices corresponding to the cross section based on the spatial coordinates of the vertices of the adjacent cross section and the rotation matrix includes: constructing, from the spatial coordinates of the vertices corresponding to the adjacent cross section, a coordinate matrix of those vertices; left-multiplying the coordinate matrix of the vertices by the rotation matrix to perform the matrix calculation; and determining the spatial coordinates of the vertices corresponding to the cross section based on the result of the matrix calculation.
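The left-multiplication step can be sketched like this: each vertex of the adjacent cross section, taken as a coordinate column, is multiplied on the left by the 3x3 rotation matrix. This is a minimal Python sketch with illustrative names, not code from the disclosure.

```python
def rotate_vertices(R, vertices):
    """Apply the 3x3 rotation matrix R to each vertex (left-multiplication
    of the column of vertex coordinates) to obtain the new cross section."""
    return [tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))
            for v in vertices]

# A 90-degree rotation about the z axis applied to a point on the x axis:
Rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
rotated = rotate_vertices(Rz, [(1.0, 0.0, 0.0)])
```

The point on the x axis lands on the y axis, which is the expected effect of the rotation.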
S450: In response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.

In the embodiments of the present disclosure, by taking the circle as a cross section of the three-dimensional model, determining for each cross section a rotation matrix based on the cross section adjacent to it, determining the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, and rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices, the three-dimensional model of the linear object can be constructed more efficiently and accurately.
FIG. 5 is a schematic flowchart of a method for processing an augmented reality picture according to an embodiment of the present disclosure. Building on the above embodiments, this embodiment exemplarily describes how the three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices. For example, rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices includes: determining the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices, wherein the surfaces to be rendered include the cross sections to be rendered and the connecting surfaces between adjacent cross sections; for each cross section to be rendered, constructing triangle primitives from every three vertices on the cross section; for each connecting surface to be rendered, constructing triangle primitives from every three vertices on different cross sections; and rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered. For specific implementations, refer to the description of this embodiment. Technical features identical or similar to those of the foregoing embodiments are not repeated here.

As shown in FIG. 5, the method of this embodiment may include:

S510: In response to a rendering trigger request for an augmented reality picture, acquire a plurality of key points of a linear object corresponding to the augmented reality picture.

S520: For each key point, draw a circle with the key point as its center, and determine a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle.

S530: Take the circle as a cross section of the three-dimensional model and, for each cross section, determine a rotation matrix corresponding to the cross section based on the cross section adjacent to it.

S540: Determine the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix.

S550: Determine the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices, wherein the surfaces to be rendered include the cross sections to be rendered and the connecting surfaces between adjacent cross sections.

The preset rendering mode can be understood as a rendering mode preset for the plurality of vertices of the three-dimensional model, and may include continuous rendering and/or intermittent rendering. Continuous rendering can be used to render linear objects presented as solid lines; intermittent rendering can be used to render linear objects presented as dashed lines. A surface to be rendered can be understood as a surface of the three-dimensional model that needs to be rendered, and may include at least two cross sections to be rendered and the connecting surfaces between adjacent cross sections. Exemplarily, the three-dimensional model of the linear object may be a cylinder model, the cross sections to be rendered may be the two base faces of the cylinder, and the connecting surface between two adjacent cross sections may be the lateral surface between the two base faces of the cylinder.

In one embodiment, determining the surfaces of the three-dimensional model to be rendered based on the preset rendering mode and the plurality of vertices may include: when the preset rendering mode of the three-dimensional model is continuous rendering, taking the starting cross section and the ending cross section of the three-dimensional model as the cross sections to be rendered, and taking all connecting surfaces between pairwise-adjacent cross sections as the connecting surfaces to be rendered. The starting cross section can be understood as the first cross section constructed during construction of the three-dimensional model, and the ending cross section as the last cross section constructed.

In another embodiment, determining the surfaces of the three-dimensional model to be rendered based on the preset rendering mode and the plurality of vertices includes: when the preset rendering mode of the three-dimensional model is intermittent rendering, taking the starting cross section, the ending cross section, and at least two cross sections other than the starting and ending cross sections of the three-dimensional model as the cross sections to be rendered; and determining the connecting surfaces to be rendered based on the cross sections to be rendered, so that the connecting surfaces are displayed intermittently.

In the embodiments of the present disclosure, whether a cross section is one to be rendered can be determined according to its ordinal number, and this can be done in many ways. For example, cross sections with odd ordinal numbers may be taken as the cross sections to be rendered; or cross sections with even ordinal numbers may be taken as the cross sections to be rendered. It should be noted that the intermittent rendering may be regular (see FIG. 6A) or irregular (see FIG. 6B).
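Selecting cross sections by ordinal number, as described above, could look like the sketch below. It assumes, purely for illustration, that odd-numbered cross sections are kept and that the starting and ending cross sections are always rendered; the function name and indexing scheme are not from the disclosure.

```python
def sections_to_render(num_sections):
    """Return the indices of cross sections kept for intermittent
    (dashed-line) rendering: start, end, and every odd-numbered section."""
    keep = {0, num_sections - 1}               # start and end cross sections
    keep.update(i for i in range(num_sections) if i % 2 == 1)
    return sorted(keep)

# With 7 cross sections, sections 2 and 4 are skipped, producing the gaps.
selected = sections_to_render(7)
```

Taking even ordinal numbers instead, or varying the skipped run lengths, yields the irregular dashed pattern mentioned above.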
S560: For each cross section to be rendered, construct triangle primitives from every three vertices on the cross section.

A triangle primitive can be understood as a triangular patch. For example, for each cross section to be rendered, all vertices of the cross section can be determined, and a preset model-patch construction algorithm (such as a region-growing algorithm) can be used to connect every three of the vertices on the cross section, thereby constructing the triangle primitives.

S570: For each connecting surface to be rendered, construct triangle primitives from every three vertices on different cross sections.

For example, for each connecting surface to be rendered, all vertices of the connecting surface can be determined, and a preset model-patch construction algorithm can be used to connect every three of the vertices on the connecting surface, thereby constructing the triangle primitives.
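One simple way to triangulate the connecting surface between two adjacent cross sections is to pair each edge of one vertex ring with the corresponding edge of the next ring and split the resulting quad into two triangles. The sketch below uses that assumed index scheme for illustration; the disclosure itself only specifies that every three vertices on different cross sections form a triangle primitive.

```python
def connecting_face_triangles(ring_a, ring_b):
    """Build triangle primitives (as index triples) joining two rings of
    vertex indices on adjacent cross sections; both rings have equal length."""
    m = len(ring_a)
    tris = []
    for i in range(m):
        j = (i + 1) % m                  # wrap around the ring
        # Each quad (a_i, a_j, b_j, b_i) is split into two triangles.
        tris.append((ring_a[i], ring_a[j], ring_b[j]))
        tris.append((ring_a[i], ring_b[j], ring_b[i]))
    return tris

# Two 3-vertex rings produce 3 quads, i.e. 6 triangle primitives.
tris = connecting_face_triangles([0, 1, 2], [3, 4, 5])
```

Each ring of m vertices thus contributes 2m triangles to the connecting surface, which is the usual cost model for a tube mesh.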
S580: Render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered, and display the three-dimensional model in the augmented reality picture.

In the embodiments of the present disclosure, the three-dimensional model of the linear object can be generated based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered. After the three-dimensional model is generated, it can be rendered based on its rendering parameters; after rendering is complete, the rendered three-dimensional model can be displayed in the augmented reality picture.

To improve the realism of the three-dimensional model in the augmented reality picture, after the three-dimensional model of the linear object is generated, the triangle primitives in the model can be refined based on model refinement parameters set for the model. The refinement parameters may include patch shape, the size of individual patches, the tension between adjacent patches, and so on.

S590: In response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture.

In the embodiments of the present disclosure, the surfaces of the three-dimensional model to be rendered are determined based on the preset rendering mode of the three-dimensional model and its plurality of vertices, wherein the surfaces to be rendered include the cross sections to be rendered and the connecting surfaces between adjacent cross sections; for each cross section to be rendered, triangle primitives are constructed from every three vertices on the cross section; for each connecting surface to be rendered, triangle primitives are constructed from every three vertices on different cross sections; and the three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered. This makes it possible to render the three-dimensional model of the linear object in multiple ways, yielding multiple presentation forms of the model and enriching the content of the augmented reality picture.
An embodiment of the present disclosure provides an example of a method for processing an augmented reality picture. In this example, the motion trajectory of a moving object is taken as the linear object, and the trajectory of the moving object may be a parabola; that is, the linear object may be a parabola. For specific implementations, refer to the description of this embodiment. Technical features identical or similar to those of the foregoing embodiments are not repeated here.

As shown in FIG. 7, after a rendering trigger request for the augmented reality picture is received, a plurality of key points of the parabola corresponding to the augmented reality picture (P0, P1, P2, ..., PN in FIG. 7) can be acquired.

Exemplarily, acquiring the plurality of key points of the parabola corresponding to the augmented reality picture may include: acquiring the initial velocity vector V of the moving object, its initial coordinates P(x0, y0), and the gravitational acceleration g. The frame rate for drawing the trajectory points of the moving object may be f. The components of the initial velocity vector V along the x axis and the z axis may be combined into a single horizontal component Vx, and the vertical direction may be kept as a separate vector Vy. The vertical height may then be computed once for every horizontal step of 1/f according to the drawing frame rate, with the virtual time t incremented by 1/f.

The height y of the moving object in the vertical direction can be calculated according to the following formula:

y = y0 + Vy·t + (1/2)·g·t²

The distance x of the moving object in the horizontal direction can be calculated according to the following formula:

x = x0 + Vx·t

After the calculation is complete, the horizontal distance can be decomposed into an x-axis distance and a z-axis distance according to the initial velocity V, thereby obtaining the plurality of trajectory points of the moving object, i.e., the plurality of key points of the parabola.
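The trajectory computation above, advancing the virtual time by 1/f per point and evaluating the two formulas, can be sketched as follows. Variable names follow the description; the final split of the horizontal distance back into x/z components is omitted for brevity, and the concrete launch values are illustrative assumptions.

```python
def trajectory_keypoints(p0, vx, vy, g, f, num_points):
    """Sample key points (horizontal distance, height) of a projectile at
    drawing frame rate f:  x = x0 + Vx*t,  y = y0 + Vy*t + (1/2)*g*t^2."""
    x0, y0 = p0
    points = []
    for n in range(num_points):
        t = n / f                  # virtual time advances by 1/f per point
        x = x0 + vx * t
        y = y0 + vy * t + 0.5 * g * t * t
        points.append((x, y))
    return points

# Launch from the origin with Vx = Vy = 10, g = -10, drawing 5 points per
# unit of virtual time; the object returns to height 0 at t = 2.
pts = trajectory_keypoints((0.0, 0.0), 10.0, 10.0, -10.0, 5.0, 11)
```

With these values the apex lies at the sixth sample and the last sample lands back on the launch height, matching the parabola shape the key points are meant to trace.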
For each key point, a circle is drawn with the key point (P(n) in FIG. 7) as its center, and a plurality of vertices of the three-dimensional model of the linear object (v(n)(0), v(n)(1), ..., v(n)(m) in FIG. 7) are determined based on the points on the circle. The circle is taken as a cross section of the three-dimensional model and, for each cross section, the vector between the center of the cross section and the center of the adjacent cross section is used as the reference vector (the vector obtained from p(n-1) and p(n) in FIG. 7), and the rotation matrix corresponding to the cross section (M(n) in FIG. 7) is calculated from the horizontal direction vector and the reference vector.

The spatial coordinates of the vertices corresponding to each cross section are determined based on the rotation matrix. The surfaces of the three-dimensional model to be rendered are determined based on the preset rendering mode of the three-dimensional model and its plurality of vertices; the surfaces to be rendered may include the cross sections to be rendered and the connecting surfaces between adjacent cross sections. For each cross section to be rendered, triangle primitives are constructed from every three vertices on the cross section (see FIG. 8). For each connecting surface to be rendered, triangle primitives are constructed from every three vertices on different cross sections (see FIG. 9).

For example, when the preset rendering mode of the three-dimensional model of the parabola is intermittent rendering, the starting cross section, the ending cross section, and at least two cross sections other than the starting and ending cross sections of the parabola's three-dimensional model are all taken as cross sections to be rendered; and the connecting surfaces to be rendered are determined based on the cross sections to be rendered, so that the connecting surfaces are displayed intermittently (see FIG. 10).

The three-dimensional model of the linear object is rendered based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered, and the three-dimensional model is displayed in the augmented reality picture.

In the embodiments of the present disclosure, by displaying the three-dimensional model of the linear object in the augmented reality picture, the linear object is not only blended into the augmented reality picture more realistically, but the user can also interact effectively with the linear object according to personalized needs, improving the user experience.
FIG. 11 is a schematic structural diagram of an apparatus for processing an augmented reality picture according to an embodiment of the present disclosure. As shown in FIG. 11, the apparatus includes a request module 610 and a display module 620.

The request module 610 is configured to, in response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture. The display module 620 is configured to, in response to a display adjustment operation on the three-dimensional model, display the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.

In the embodiments of the present disclosure, by displaying, in response to a rendering trigger request for an augmented reality picture, a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture, the stereoscopic effect and realism of the linear object in the augmented reality picture can be improved. In response to a display adjustment operation on the linear object, the adjusted three-dimensional model of the linear object is displayed in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation. In the embodiments of the present disclosure, by displaying the three-dimensional model of the linear object in the augmented reality picture, the linear object is not only blended into the augmented reality picture more realistically, but the user can also interact effectively with the linear object according to personalized needs, improving the user experience.

On the basis of the above embodiments, in one embodiment, the request module 610 includes a key point acquisition unit and a key point rendering unit, wherein:

the key point acquisition unit is configured to acquire a plurality of key points of the linear object corresponding to the augmented reality picture; and

the key point rendering unit is configured to render the three-dimensional model of the linear object based on the plurality of key points and display the three-dimensional model in the augmented reality picture.

On the basis of the above embodiments, in one embodiment, the key point rendering unit is configured to, for each key point, draw a circle with the key point as its center, determine a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle, and render the three-dimensional model of the linear object based on the plurality of vertices.

On the basis of the above embodiments, in one embodiment, the key point rendering unit includes a rotation matrix determination subunit and a vertex rendering subunit, wherein:

the rotation matrix determination subunit is configured to take the circle as a cross section of the three-dimensional model and, for each cross section, determine a rotation matrix corresponding to the cross section based on the cross section adjacent to it; and

the vertex rendering subunit is configured to determine the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, and render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices.

On the basis of the above embodiments, in one embodiment, the rotation matrix determination subunit is configured to take the vector between the center of the cross section and the center of the adjacent cross section as a reference vector, and calculate the rotation matrix corresponding to the cross section from a horizontal direction vector and the reference vector.

On the basis of the above embodiments, in one embodiment, the vertex rendering subunit is configured to determine the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices, wherein the surfaces to be rendered include the cross sections to be rendered and the connecting surfaces between adjacent cross sections;

for each cross section to be rendered, construct triangle primitives from every three vertices on the cross section;

for each connecting surface to be rendered, construct triangle primitives from every three vertices on different cross sections; and

render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered.

On the basis of the above embodiments, in one embodiment, the vertex rendering subunit may be configured to, when the preset rendering mode of the three-dimensional model is continuous rendering, take the starting cross section and the ending cross section of the three-dimensional model as the cross sections to be rendered, and take all connecting surfaces between pairwise-adjacent cross sections as the connecting surfaces to be rendered.

On the basis of the above embodiments, in one embodiment, the vertex rendering subunit may be configured to, when the preset rendering mode of the three-dimensional model is intermittent rendering, take the starting cross section, the ending cross section, and at least two cross sections other than the starting and ending cross sections of the three-dimensional model as the cross sections to be rendered; and determine the connecting surfaces to be rendered based on the cross sections to be rendered, so that the connecting surfaces are displayed intermittently.

On the basis of the above embodiments, in one embodiment, the key point acquisition unit is configured to generate a plurality of key points of the linear object corresponding to the augmented reality picture based on a preset algorithm; or to determine an associated object of the linear object to be rendered in the augmented reality picture and determine the plurality of key points of the linear object corresponding to the augmented reality picture based on the motion trajectory of the associated object.

The apparatus for processing an augmented reality picture provided by the embodiments of the present disclosure can perform the method for processing an augmented reality picture provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the performed method.

It is worth noting that the units and modules included in the above apparatus are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the names of the functional units are only for ease of mutual distinction and are not intended to limit the scope of protection of the embodiments of the present disclosure.
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now to FIG. 12, it shows a schematic structural diagram of an electronic device (e.g., the terminal device or server in FIG. 12) 700 suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in FIG. 12 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 12, the electronic device 700 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage apparatus 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing apparatus 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.

Generally, the following apparatuses may be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 708 including, for example, a magnetic tape and a hard disk; and a communication apparatus 709. The communication apparatus 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 12 shows the electronic device 700 with various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.

In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 709, or installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above functions defined in the methods of the embodiments of the present disclosure are performed.

The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.

The electronic device provided by the embodiments of the present disclosure and the method for processing an augmented reality picture provided by the above embodiments belong to the same inventive concept; for technical details not described exhaustively in this embodiment, refer to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.

An embodiment of the present disclosure provides a computer storage medium on which a computer program is stored; when the program is executed by a processor, it implements the method for processing an augmented reality picture provided by the above embodiments.

It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.

In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.

The above computer-readable medium may be included in the above electronic device, or may exist independently without being incorporated into the electronic device.

The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and in response to a display adjustment operation on the linear object, display the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.

The storage medium may be a non-transitory storage medium.

Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partially on the user's computer, as a stand-alone software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).

The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not in some cases limit the unit itself; for example, a first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".

The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, [Example One] provides a method for processing an augmented reality picture, including:

in response to a rendering trigger request for an augmented reality picture, displaying a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and

in response to a display adjustment operation on the linear object, displaying the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.

According to one or more embodiments of the present disclosure, [Example Two] provides a method for processing an augmented reality picture, including:

the displaying of the three-dimensional model of the linear object corresponding to the augmented reality picture in the augmented reality picture includes:

acquiring a plurality of key points of the linear object corresponding to the augmented reality picture; and

rendering the three-dimensional model of the linear object based on the plurality of key points, and displaying the three-dimensional model in the augmented reality picture.

According to one or more embodiments of the present disclosure, [Example Three] provides a method for processing an augmented reality picture, including:

the rendering of the three-dimensional model of the linear object based on the plurality of key points includes:

for each key point, drawing a circle with the key point as its center, determining a plurality of vertices of the three-dimensional model of the linear object based on the points on the circle, and rendering the three-dimensional model of the linear object based on the plurality of vertices.

According to one or more embodiments of the present disclosure, [Example Four] provides a method for processing an augmented reality picture, including:

the rendering of the three-dimensional model of the linear object based on the plurality of vertices includes:

taking the circle as a cross section of the three-dimensional model and, for each cross section, determining a rotation matrix corresponding to the cross section based on the cross section adjacent to it; and

determining the spatial coordinates of the vertices corresponding to the cross section based on the rotation matrix, and rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices.

According to one or more embodiments of the present disclosure, [Example Five] provides a method for processing an augmented reality picture, including:

the determining of the rotation matrix corresponding to the cross section based on the cross section adjacent to it includes:

taking the vector between the center of the cross section and the center of the adjacent cross section as a reference vector, and calculating the rotation matrix corresponding to the cross section from a horizontal direction vector and the reference vector.

According to one or more embodiments of the present disclosure, [Example Six] provides a method for processing an augmented reality picture, including:

the rendering of the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices includes:

determining the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices, wherein the surfaces to be rendered include the cross sections to be rendered and the connecting surfaces between adjacent cross sections;

for each cross section to be rendered, constructing triangle primitives from every three vertices on the cross section;

for each connecting surface to be rendered, constructing triangle primitives from every three vertices on different cross sections; and

rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the surfaces to be rendered.

According to one or more embodiments of the present disclosure, [Example Seven] provides a method for processing an augmented reality picture, including:

the determining of the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices includes:

when the preset rendering mode of the three-dimensional model is continuous rendering, taking the starting cross section and the ending cross section of the three-dimensional model as the cross sections to be rendered, and taking all connecting surfaces between pairwise-adjacent cross sections as the connecting surfaces to be rendered.

According to one or more embodiments of the present disclosure, [Example Eight] provides a method for processing an augmented reality picture, including:

the determining of the surfaces of the three-dimensional model to be rendered based on the preset rendering mode of the three-dimensional model and its plurality of vertices includes:

when the preset rendering mode of the three-dimensional model is intermittent rendering, taking the starting cross section, the ending cross section, and at least two cross sections other than the starting and ending cross sections of the three-dimensional model as the cross sections to be rendered; and determining the connecting surfaces to be rendered based on the cross sections to be rendered, so that the connecting surfaces are displayed intermittently.

According to one or more embodiments of the present disclosure, [Example Nine] provides a method for processing an augmented reality picture, including:

the acquiring of the plurality of key points of the linear object corresponding to the augmented reality picture includes:

generating a plurality of key points of the linear object corresponding to the augmented reality picture based on a preset algorithm;

or,

determining an associated object of the linear object to be rendered in the augmented reality picture, and determining the plurality of key points of the linear object corresponding to the augmented reality picture based on the motion trajectory of the associated object.

According to one or more embodiments of the present disclosure, [Example Ten] provides an apparatus for processing an augmented reality picture, including:

a request module configured to, in response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture; and

a display module configured to, in response to a display adjustment operation on the three-dimensional model, display the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation includes a display position adjustment operation, a display size adjustment operation, and/or a display angle adjustment operation.

Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to embodiments formed by specific combinations of the above technical features, and should also cover other embodiments formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

In addition, although the operations are depicted in a specific order, this should not be understood as requiring that the operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.

Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (20)

  1. A method for processing an augmented reality picture, comprising:
    in response to a rendering trigger request for an augmented reality picture, displaying a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture;
    in response to a display adjustment operation for the linear object, displaying the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation comprises at least one of a display position adjustment operation, a display size adjustment operation, or a display angle adjustment operation.
  2. The method for processing an augmented reality picture according to claim 1, wherein the displaying the three-dimensional model of the linear object corresponding to the augmented reality picture in the augmented reality picture comprises:
    acquiring a plurality of key points of the linear object corresponding to the augmented reality picture;
    rendering the three-dimensional model of the linear object based on the plurality of key points, and displaying the three-dimensional model in the augmented reality picture.
  3. The method for processing an augmented reality picture according to claim 2, wherein the rendering the three-dimensional model of the linear object based on the plurality of key points comprises:
    for each of the key points, drawing a circle with the key point as a center, determining a plurality of vertices of the three-dimensional model of the linear object based on points located on the circle, and rendering the three-dimensional model of the linear object based on the plurality of vertices.
  4. The method for processing an augmented reality picture according to claim 3, wherein the rendering the three-dimensional model of the linear object based on the plurality of vertices comprises:
    taking the circles as cross-sections of the three-dimensional model, and for each cross-section, determining a rotation matrix corresponding to the cross-section based on a cross-section adjacent to the cross-section;
    determining spatial coordinates of the vertices corresponding to the cross-section based on the rotation matrix, and rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices.
  5. The method for processing an augmented reality picture according to claim 4, wherein the determining the rotation matrix corresponding to the cross-section based on the cross-section adjacent to the cross-section comprises:
    taking a vector between the center of the cross-section and the center of the cross-section adjacent to the cross-section as a reference vector, and calculating the rotation matrix corresponding to the cross-section according to a horizontal direction vector and the reference vector.
  6. The method for processing an augmented reality picture according to claim 4, wherein the rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices comprises:
    determining faces to be rendered of the three-dimensional model based on a preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model, wherein the faces to be rendered comprise cross-sections to be rendered and connection faces between every two adjacent cross-sections;
    for each cross-section to be rendered, constructing triangle primitives based on every three vertices located on the cross-section;
    for each connection face to be rendered, constructing triangle primitives based on every three vertices located on different cross-sections;
    rendering the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the faces to be rendered.
  7. The method for processing an augmented reality picture according to claim 6, wherein the determining the faces to be rendered of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model comprises:
    in a case where the preset rendering mode of the three-dimensional model is continuous rendering, taking the starting cross-section and the ending cross-section of the three-dimensional model as cross-sections to be rendered, and taking the connection faces between all pairs of adjacent cross-sections respectively as connection faces to be rendered.
  8. The method for processing an augmented reality picture according to claim 6, wherein the determining the faces to be rendered of the three-dimensional model based on the preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model comprises:
    in a case where the preset rendering mode of the three-dimensional model is discontinuous rendering, taking the starting cross-section, the ending cross-section, and at least two cross-sections other than the starting cross-section and the ending cross-section of the three-dimensional model respectively as cross-sections to be rendered; and determining connection faces to be rendered based on the cross-sections to be rendered, so that the connection faces are displayed discontinuously.
  9. The method for processing an augmented reality picture according to claim 2, wherein the acquiring the plurality of key points of the linear object corresponding to the augmented reality picture comprises:
    generating a plurality of key points of the linear object corresponding to the augmented reality picture based on a preset algorithm;
    or,
    determining an associated object of the linear object to be rendered in the augmented reality picture, and determining a plurality of key points of the linear object corresponding to the augmented reality picture based on a motion trajectory of the associated object.
  10. A device for processing an augmented reality picture, comprising:
    a request module, configured to, in response to a rendering trigger request for an augmented reality picture, display a three-dimensional model of a linear object corresponding to the augmented reality picture in the augmented reality picture;
    a display module, configured to, in response to a display adjustment operation for the three-dimensional model, display the adjusted three-dimensional model of the linear object in the augmented reality picture, wherein the display adjustment operation comprises at least one of a display position adjustment operation, a display size adjustment operation, or a display angle adjustment operation.
  11. The device for processing an augmented reality picture according to claim 10, wherein the request module comprises a key point acquisition unit and a key point rendering unit; wherein
    the key point acquisition unit is configured to acquire a plurality of key points of the linear object corresponding to the augmented reality picture;
    the key point rendering unit is configured to render the three-dimensional model of the linear object based on the plurality of key points, and display the three-dimensional model in the augmented reality picture.
  12. The device for processing an augmented reality picture according to claim 11, wherein the key point rendering unit is configured to, for each of the key points, draw a circle with the key point as a center, determine a plurality of vertices of the three-dimensional model of the linear object based on points located on the circle, and render the three-dimensional model of the linear object based on the plurality of vertices.
  13. The device for processing an augmented reality picture according to claim 12, wherein the key point rendering unit comprises a rotation matrix determination subunit and a vertex rendering subunit, wherein
    the rotation matrix determination subunit is configured to take the circles as cross-sections of the three-dimensional model, and for each cross-section, determine a rotation matrix corresponding to the cross-section based on a cross-section adjacent to the cross-section;
    the vertex rendering subunit is configured to determine spatial coordinates of the vertices corresponding to the cross-section based on the rotation matrix, and render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices.
  14. The device for processing an augmented reality picture according to claim 13, wherein the rotation matrix determination subunit is configured to take a vector between the center of the cross-section and the center of the cross-section adjacent to the cross-section as a reference vector, and calculate the rotation matrix corresponding to the cross-section according to a horizontal direction vector and the reference vector.
  15. The device for processing an augmented reality picture according to claim 13, wherein the vertex rendering subunit is configured to determine faces to be rendered of the three-dimensional model based on a preset rendering mode of the three-dimensional model and the plurality of vertices of the three-dimensional model, wherein the faces to be rendered comprise cross-sections to be rendered and connection faces between every two adjacent cross-sections;
    for each cross-section to be rendered, construct triangle primitives based on every three vertices located on the cross-section;
    for each connection face to be rendered, construct triangle primitives based on every three vertices located on different cross-sections;
    and render the three-dimensional model of the linear object based on the spatial coordinates of the plurality of vertices and the triangle primitives of the faces to be rendered.
  16. The device for processing an augmented reality picture according to claim 15, wherein the vertex rendering subunit is configured to, in a case where the preset rendering mode of the three-dimensional model is continuous rendering, take the starting cross-section and the ending cross-section of the three-dimensional model as cross-sections to be rendered, and take the connection faces between all pairs of adjacent cross-sections respectively as connection faces to be rendered.
  17. The device for processing an augmented reality picture according to claim 15, wherein the vertex rendering subunit is configured to, in a case where the preset rendering mode of the three-dimensional model is discontinuous rendering, take the starting cross-section, the ending cross-section, and at least two cross-sections other than the starting cross-section and the ending cross-section of the three-dimensional model respectively as cross-sections to be rendered; and determine connection faces to be rendered based on the cross-sections to be rendered, so that the connection faces are displayed discontinuously.
  18. The device for processing an augmented reality picture according to claim 11, wherein the key point acquisition unit is configured to generate a plurality of key points of the linear object corresponding to the augmented reality picture based on a preset algorithm; or, determine an associated object of the linear object to be rendered in the augmented reality picture, and determine a plurality of key points of the linear object corresponding to the augmented reality picture based on a motion trajectory of the associated object.
  19. An electronic device, comprising:
    one or more processors;
    a storage device configured to store one or more programs,
    wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the method for processing an augmented reality picture according to any one of claims 1-9.
  20. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to execute the method for processing an augmented reality picture according to any one of claims 1-9.
PCT/CN2023/125332 2022-10-28 2023-10-19 Augmented reality picture processing method and apparatus, electronic device, and storage medium WO2024088144A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211338450.2A CN116030221A (zh) 2022-10-28 2022-10-28 Augmented reality picture processing method and apparatus, electronic device, and storage medium
CN202211338450.2 2022-10-28

Publications (1)

Publication Number Publication Date
WO2024088144A1 (zh)

Family

ID=86071271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/125332 WO2024088144A1 (zh) 2022-10-28 2023-10-19 Augmented reality picture processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN116030221A (zh)
WO (1) WO2024088144A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030221A (zh) * 2022-10-28 2023-04-28 北京字跳网络技术有限公司 增强现实画面的处理方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190217175A1 (en) * 2019-03-20 2019-07-18 Swift Tech Interactive AB Systems for facilitating practice of bowling and related methods
CN112529997A (zh) * 2020-12-28 2021-03-19 北京字跳网络技术有限公司 Fireworks visual effect generation method, video generation method, and electronic device
CN112700517A (zh) * 2020-12-28 2021-04-23 北京字跳网络技术有限公司 Method for generating fireworks visual effect, electronic device, and storage medium
CN114332323A (zh) * 2021-12-24 2022-04-12 北京字跳网络技术有限公司 Particle effect rendering method, apparatus, device, and medium
CN114567805A (zh) * 2022-02-24 2022-05-31 北京字跳网络技术有限公司 Method and apparatus for determining special-effect video, electronic device, and storage medium
CN115063518A (zh) * 2022-06-08 2022-09-16 Oppo广东移动通信有限公司 Trajectory rendering method, apparatus, electronic device, and storage medium
CN116030221A (zh) * 2022-10-28 2023-04-28 北京字跳网络技术有限公司 Augmented reality picture processing method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN116030221A (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
KR102663617B1 Conditional modification of augmented reality objects
WO2022105862A1 Video generation and display method, apparatus, device, and medium
JP6181917B2 Rendering system, rendering server, control method therefor, program, and recording medium
WO2024088144A1 Augmented reality picture processing method and apparatus, electronic device, and storage medium
US11776209B2 Image processing method and apparatus, electronic device, and storage medium
CN113038264B Live video processing method, apparatus, device, and storage medium
WO2023179346A1 Special-effect image processing method and apparatus, electronic device, and storage medium
WO2022088928A1 Elastic object rendering method, apparatus, device, and storage medium
WO2024016930A1 Special-effect processing method and apparatus, electronic device, and storage medium
US20230290043A1 Picture generation method and apparatus, device, and medium
US20230401764A1 Image processing method and apparatus, electronic device and computer readable medium
CN111142967B Augmented reality display method, apparatus, electronic device, and storage medium
WO2022012349A1 Animation processing method, apparatus, electronic device, and storage medium
CN114401443B Special-effect video processing method, apparatus, electronic device, and storage medium
WO2024088141A1 Special-effect processing method and apparatus, electronic device, and storage medium
CN111862349A Virtual brush implementation method, apparatus, and computer-readable storage medium
WO2023071630A1 Augmented-display-based information interaction method, apparatus, device, and medium
WO2023121569A2 Particle special-effect rendering method, apparatus, device, and storage medium
CN114913277A Object stereoscopic interactive display method, apparatus, device, and medium
CN114170381A Three-dimensional path display method, apparatus, readable storage medium, and electronic device
CN108536510B Implementation method and apparatus based on human-computer interaction application
WO2024066723A1 Position update method for virtual scene, device, medium, and program product
WO2023030106A1 Object display method, apparatus, electronic device, and storage medium
CN116320646A Interactive processing method and apparatus for three-dimensional virtual gifts in a virtual reality live-streaming room
CN115170715A Image rendering method, apparatus, electronic device, and medium