CN113570693A - Method, device and equipment for changing visual angle of three-dimensional model and storage medium - Google Patents


Info

Publication number
CN113570693A
Authority
CN
China
Prior art keywords
dimensional model
target
transformation
subspace
visual angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110843503.5A
Other languages
Chinese (zh)
Inventor
高玮蔚
龙琦
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110843503.5A
Publication of CN113570693A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of this application disclose a method, apparatus, device, and storage medium for transforming the viewing angle of a three-dimensional model. The method includes: in response to a selection operation on at least one target three-dimensional model, determining the selected at least one target three-dimensional model in the stereoscopic space in which three-dimensional models are displayed; determining, from the stereoscopic space, a subspace that includes the selected at least one target three-dimensional model; and, in response to a perspective transformation operation on the at least one target three-dimensional model, performing perspective transformation on the selected at least one target three-dimensional model with the central axis of the subspace as the transformation reference. The target three-dimensional model can thus be quickly transformed to any viewing angle, which reduces the steps of perspective transformation and improves transformation efficiency.

Description

Method, device and equipment for changing visual angle of three-dimensional model and storage medium
Technical Field
The present disclosure relates to the field of special effect processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for changing a view angle of a three-dimensional model.
Background
When making special effects, users frequently need to transform the viewing angle of an object. For example, when editing a special effect on a three-dimensional model, the user needs to inspect the model from multiple angles in order to edit it well.
In this scenario, tools in the prior art can only change the viewing angle of the whole view. Because only the whole view rotates, the user cannot freely control the viewing angle of the subject (the object to be viewed), and in many cases the subject ends up far outside the user's visual range, so the user's need to adjust its details cannot be met. The user then has to move or rotate each coordinate axis independently before the viewing angle properly focuses on the subject, which adds adjustment steps and reduces the efficiency of inspecting the subject from a specific viewing angle.
Disclosure of Invention
Embodiments of this application provide a method, apparatus, device, and storage medium for perspective transformation of a three-dimensional model, which reduce the steps of perspective transformation and improve transformation efficiency.
In a first aspect, an embodiment of the present application provides a method for transforming a perspective of a three-dimensional model, including:
in response to a selection operation on at least one target three-dimensional model, determining the selected at least one target three-dimensional model in the stereoscopic space in which three-dimensional models are displayed;
determining, from the stereoscopic space, a subspace that includes the selected at least one target three-dimensional model;
and, in response to a perspective transformation operation on the at least one target three-dimensional model, performing perspective transformation on the selected at least one target three-dimensional model with the central axis of the subspace as the transformation reference.
In some exemplary embodiments, after the performing the perspective transformation on the at least one target three-dimensional model, the method further includes:
in the transformed viewing angle, if the selected at least one target three-dimensional model and an unselected three-dimensional model have an occlusion relationship, displaying the unselected three-dimensional model in a weakened (de-emphasized) manner.
In some exemplary embodiments, the weakened display is performed in at least one of the following ways:
a semi-transparent display, a blurred display, or a gridded display.
In some exemplary embodiments, the performing, in response to the perspective transformation operation on the at least one target three-dimensional model, perspective transformation on the at least one target three-dimensional model with a central axis of the subspace as a transformation reference includes:
in response to a perspective transformation operation on the at least one target three-dimensional model, determining a target perspective matched with the perspective transformation operation;
and performing visual angle transformation on the at least one target three-dimensional model by taking the central axis of the subspace as a transformation reference until the visual angle of the at least one target three-dimensional model is transformed to a target visual angle.
In some exemplary embodiments, if at least one of the selected target three-dimensional models is a dynamic three-dimensional model, before responding to the selection operation of the at least one target three-dimensional model, the method further includes:
after a user's pause operation on the video formed by the dynamic three-dimensional model is detected, determining the still picture displayed by the paused video;
the performing perspective transformation on the at least one selected target three-dimensional model by taking the central axis of the subspace as a transformation reference in response to the perspective transformation operation on the at least one target three-dimensional model comprises:
in response to a perspective transformation operation performed on the dynamic three-dimensional model via the still picture, performing perspective transformation on the selected at least one target three-dimensional model, including the dynamic three-dimensional model, with the central axis of the subspace as the transformation reference.
In some exemplary embodiments, after the performing the perspective transformation on the at least one selected target three-dimensional model including the dynamic three-dimensional model, the method further includes:
after a user's operation to resume playing the paused video formed by the dynamic three-dimensional model is detected, controlling the dynamic three-dimensional model to be displayed dynamically at the target viewing angle.
In some exemplary embodiments, a central axis of the subspace is a central axis perpendicular to a reference plane in the stereoscopic space.
In a second aspect, an embodiment of the present application provides a device for transforming a perspective of a three-dimensional model, including:
a target three-dimensional model determination unit configured to, in response to a selection operation on at least one target three-dimensional model, determine the selected at least one target three-dimensional model in the stereoscopic space in which three-dimensional models are displayed;
a subspace determination unit configured to perform a determination of a subspace including the selected at least one target three-dimensional model from the stereo space;
a perspective transformation unit configured to perform perspective transformation on the at least one selected target three-dimensional model with a central axis of the subspace as a transformation reference in response to a perspective transformation operation on the at least one target three-dimensional model.
In some exemplary embodiments, the device further includes a weakening display unit configured to perform, after the perspective transformation of the at least one target three-dimensional model:
and in the converted visual angle, if the at least one selected target three-dimensional model and the unselected three-dimensional model have a shielding relation, weakening and displaying the unselected three-dimensional model.
In some exemplary embodiments, the weakening display unit is specifically configured to perform the weakened display in at least one of the following ways:
a semi-transparent display, a blurred display, or a gridded display.
In some exemplary embodiments, the view angle transformation unit is configured to perform:
in response to a perspective transformation operation on the at least one target three-dimensional model, determining a target perspective matched with the perspective transformation operation;
and performing visual angle transformation on the at least one target three-dimensional model by taking the central axis of the subspace as a transformation reference until the visual angle of the at least one target three-dimensional model is transformed to a target visual angle.
In some exemplary embodiments, if at least one of the selected target three-dimensional models is a dynamic three-dimensional model, the device further includes a pausing unit which, before the selection operation on the at least one target three-dimensional model is responded to, is configured to perform:
after the pause operation of a user on the video formed by the dynamic three-dimensional model is detected, determining a static picture displayed by the paused video;
the view transformation unit is specifically configured to perform:
in response to a perspective transformation operation performed on the dynamic three-dimensional model via the still picture, performing perspective transformation on the selected at least one target three-dimensional model, including the dynamic three-dimensional model, with the central axis of the subspace as the transformation reference.
In some exemplary embodiments, the device is further configured to perform, after the perspective transformation of the at least one target three-dimensional model selected to include the dynamic three-dimensional model:
after a user's operation to resume playing the paused video formed by the dynamic three-dimensional model is detected, controlling the dynamic three-dimensional model to be displayed dynamically at the target viewing angle.
In some exemplary embodiments, a central axis of the subspace is a central axis perpendicular to a reference plane in the stereoscopic space.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any one of the above-described perspective transformation methods of the three-dimensional model.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform any one of the above perspective transformation methods for a three-dimensional model.
In a fifth aspect, an embodiment of the present application provides a computer program product including a computer program/instructions that, when executed by a processor, implement any one of the above perspective transformation methods for a three-dimensional model.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the selected at least one target three-dimensional model is determined through the selection operation of the target three-dimensional model, and the subspace comprising the at least one target three-dimensional model is determined aiming at the selected at least one target three-dimensional model, so that the reference is converted into the central axis of the subspace when the visual angle conversion operation is carried out, and further the visual angle conversion of the selected at least one target three-dimensional model is realized. Therefore, the target three-dimensional model can be rapidly converted to any visual angle, the steps of visual angle conversion are reduced, and the conversion efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to illustrate the technical solutions of the embodiments of this application more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of this application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of an application scenario of a perspective transformation method for a three-dimensional model according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for transforming a perspective of a three-dimensional model according to an embodiment of the present disclosure;
FIG. 3 is an interface diagram of a single three-dimensional model before view transformation according to an embodiment of the present application;
FIG. 4 is an interface diagram of a selected state during a perspective transformation of a single three-dimensional model according to an embodiment of the present application;
FIG. 5 is an interface diagram of a single three-dimensional model after view transformation according to an embodiment of the present application;
FIG. 6 is an interface diagram of another single three-dimensional model according to an embodiment of the present application before perspective transformation;
FIG. 7 is an interface diagram of a selected state during a perspective transformation of another single three-dimensional model according to an embodiment of the present application;
FIG. 8 is an interface diagram of another single three-dimensional model after perspective transformation according to an embodiment of the present application;
FIG. 9 is an interface diagram of a multi-three-dimensional model before view transformation according to an embodiment of the present application;
FIG. 10 is an interface diagram of a selected state during view transformation of a multi-three-dimensional model according to an embodiment of the present application;
FIG. 11 is an interface diagram of a multi-three-dimensional model after view transformation according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a three-dimensional model of eyeglass effects provided in accordance with an embodiment of the present application;
fig. 13 is a schematic structural diagram of a perspective transformation apparatus for a three-dimensional model according to an embodiment of the present application;
fig. 14 is a schematic view of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings above are used to distinguish between similar elements and are not necessarily intended to describe a particular sequence or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application, as recited in the appended claims.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
(1) In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
(2) "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
(3) Server: a device that serves the terminal, for example by providing resources to the terminal and storing terminal data; the server corresponds to an application installed on the terminal and runs in cooperation with that application.
(4) Client: this may refer to a software application (APP) or to a client program. It has a visual display interface and can interact with a user; it corresponds to a server and provides local services to the user. Apart from applications that run only locally, software applications are generally installed on an ordinary client terminal and run in cooperation with a server. With the development of the Internet, common clients include e-mail clients for sending and receiving e-mail, and instant messaging clients. Such applications require a corresponding server and service program in the network to provide services, such as database services and configuration parameter services, so a dedicated communication connection must be established between the client terminal and the server terminal to ensure normal operation of the application.
(5) AR (Augmented Reality) special effects: the additional styles of the works when the camera shoots comprise a filter, face deformation, makeup beautifully, face sticker following and the like.
(6) AR special effect tool: a tool for designing the AR special effect enables a user to design the AR content, form, playing method and the like in camera shooting.
In practice, when making special effects a user frequently needs to transform the viewing angle of an object. For example, when editing a special effect on a three-dimensional model, the user needs to inspect and/or compare the model from multiple angles in order to edit it well.
In this scenario, tools in the prior art can only change the viewing angle of the whole view, so the user cannot freely control the viewing angle of the subject, and in many cases the subject ends up far outside the user's visual range. Thus, the user's need to inspect details cannot be met.
Likewise, when the viewing angle of a 3D (three-dimensional) special-effect authoring tool in the related art is transformed, only the whole view is transformed. Because of this, the user cannot freely control the viewing angle of the subject, which in many cases ends up far outside the user's visual range. The user then has to adjust movement and rotation along the x, y, and z axes independently before the viewing angle properly focuses on the subject, which adds adjustment steps and reduces usability.
To this end, this application provides a perspective transformation method for a three-dimensional model. In response to a selection operation on at least one target three-dimensional model, the selected at least one target three-dimensional model is determined in the stereoscopic space in which three-dimensional models are displayed; for example, the user selects one target three-dimensional model with a click operation. A subspace including the selected at least one target three-dimensional model is then determined from the stereoscopic space, such as a cuboid enclosing the target three-dimensional model. Finally, in response to a perspective transformation operation on the at least one target three-dimensional model, perspective transformation is performed on the selected at least one target three-dimensional model with the central axis of the subspace as the transformation reference. Performing the perspective transformation in this way reduces the steps of the transformation and improves transformation efficiency.
After introducing the design concept of the embodiment of the present application, some simple descriptions are provided below for application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Fig. 1 is a schematic view of an application scenario of a perspective transformation method for a three-dimensional model according to an embodiment of the present application. The application scenario includes a plurality of terminal devices 101 (terminal device 101-1, terminal device 101-2, ... terminal device 101-n) and a perspective transformation server 102 for the three-dimensional model. The terminal devices 101 and the server 102 are connected through a wireless or wired network; the terminal devices 101 include, but are not limited to, electronic devices such as desktop computers, mobile phones, mobile computers, tablet computers, media players, smart wearable devices, and smart televisions. The perspective transformation server 102 may be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, a cloud computing center, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
The terminal device 101-1 displays three-dimensional models in a stereoscopic space. If user 1 selects at least one three-dimensional model, the selected model is determined to be a target three-dimensional model in response to the selection operation. The terminal device 101-1 then determines, from the stereoscopic space, a subspace including the selected at least one target three-dimensional model and, in response to a perspective transformation operation on the at least one target three-dimensional model, performs perspective transformation on the selected at least one target three-dimensional model with the central axis of the subspace as the transformation reference. Finally, the terminal device 101-1 presents the result of the perspective transformation to user 1.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
The following describes the technical solution provided in the embodiment of the present application with reference to the application scenario shown in fig. 1.
Referring to fig. 2, an embodiment of the present application provides a method for transforming a perspective of a three-dimensional model, including the following steps:
s201, responding to the selection operation of at least one target three-dimensional model, and determining the selected at least one target three-dimensional model in the three-dimensional space displayed by the three-dimensional model.
S202, determining a subspace including the selected at least one target three-dimensional model from the stereo space.
And S203, responding to the visual angle transformation operation of the at least one target three-dimensional model, and carrying out visual angle transformation on the selected at least one target three-dimensional model by taking the central axis of the subspace as a transformation reference.
Through the selection operation on the target three-dimensional model, the selected at least one target three-dimensional model is determined, and a subspace including the at least one target three-dimensional model is determined for it, so that when a perspective transformation operation is performed, the central axis of the subspace serves as the transformation reference and the perspective of the selected at least one target three-dimensional model is transformed accordingly. The target three-dimensional model can therefore be quickly transformed to any viewing angle, which reduces the steps of perspective transformation and improves transformation efficiency.
Referring to S201, take an AR special-effect tool as an example: in its 3D view panel, the whole space is the stereoscopic space in which three-dimensional models are displayed, and one or more three-dimensional models are shown within it. The user's selection operation may select one three-dimensional model, or several, as target three-dimensional models; a target three-dimensional model is a model whose viewing angle is to be transformed.
In a specific example, if there is one target three-dimensional model, the selection operation may be a single click or a double click on it. If there are two or more target three-dimensional models, the selection operation may be to first select one of them, hold a shortcut key on the device performing the perspective transformation function, such as the shift key, and then select the remaining target three-dimensional models. Once determined, the selected target three-dimensional models can be highlighted, for example with a highlighted bounding outline, to distinguish them from the other, unselected three-dimensional models.
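As a sketch, the single-click versus shift-click selection behavior described above could be tracked with a small selection set (the class, method names, and model identifiers here are hypothetical illustrations, not part of this application):

```python
class SelectionSet:
    """Tracks which three-dimensional models are currently selected."""

    def __init__(self):
        self.selected = []  # ordered list of selected model identifiers

    def click(self, model_id, shift_held=False):
        """A plain click selects exactly one model; a shift-click adds the
        clicked model to the existing multi-selection."""
        if not shift_held:
            self.selected = [model_id]          # replace previous selection
        elif model_id not in self.selected:
            self.selected.append(model_id)      # extend multi-selection
        return list(self.selected)
```

For example, clicking model "cube" and then shift-clicking model "sphere" leaves both selected, while a subsequent plain click on "cone" collapses the selection to "cone" alone.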
Referring to S202, in order to determine a transformation reference in the perspective transformation process, after determining the selected at least one target three-dimensional model, a subspace including the selected at least one target three-dimensional model is determined from the stereoscopic space.
The subspace is the smallest space that can enclose the selected at least one target three-dimensional model, and is usually part of the aforementioned stereoscopic space. In a specific example, the subspace may be a cuboid or another, irregular polyhedron, but for computational convenience and accuracy a cuboid is usually chosen. For example, among the candidate spaces that include the selected at least one target three-dimensional model, the cuboid with the smallest length, width, and height is determined to be the subspace.
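As a minimal sketch of this step, assuming each selected model is available as a list of (x, y, z) vertices (a simplifying assumption; this application does not specify the model representation), the smallest enclosing cuboid is the axis-aligned bounding box of all selected vertices, and the central axis perpendicular to the reference plane passes through the center of its footprint:

```python
def bounding_subspace(models):
    """Return (min_corner, max_corner) of the smallest axis-aligned cuboid
    enclosing every vertex of every selected model.

    models: list of models, each a list of (x, y, z) vertex tuples.
    """
    vertices = [v for model in models for v in model]
    mins = tuple(min(v[i] for v in vertices) for i in range(3))
    maxs = tuple(max(v[i] for v in vertices) for i in range(3))
    return mins, maxs

def central_axis(mins, maxs):
    """Central axis perpendicular to the reference (x-z) plane: the vertical
    line through the center of the cuboid's footprint."""
    cx = (mins[0] + maxs[0]) / 2.0
    cz = (mins[2] + maxs[2]) / 2.0
    return (cx, cz)  # the axis is the set of points (cx, y, cz) for all y
```

Taking the min/max over every vertex guarantees the cuboid is minimal among axis-aligned cuboids, matching the "smallest length, width and height" criterion above.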
Referring to S203, after determining at least one target three-dimensional model and a subspace including the selected at least one target three-dimensional model, in response to a perspective transformation operation on the at least one target three-dimensional model, performing perspective transformation on the selected at least one target three-dimensional model with a central axis of the subspace as a transformation reference.
Specifically, the perspective transformation operation may be a click on a perspective transformation icon or a shortcut key (the icon is displayed in a set area of the current display interface; the shortcut key is on an external device). The target viewing angle matching the operation can then be determined by identifying the operation; for example, if the operation is a click on the "front view" icon, the target viewing angle is the front view.
Then, with the central axis of the subspace as the transformation reference, perspective transformation is performed on the at least one target three-dimensional model until it reaches the target viewing angle. If one target three-dimensional model is selected, it is transformed directly to the target viewing angle; if several are selected, they are transformed to the target viewing angle simultaneously.
Therefore, by determining the target view angle matched with each view angle transformation operation and performing the transformation with the central axis of the subspace as the reference, the accuracy and efficiency of the view angle transformation are improved.
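The mapping from a view angle transformation operation to a camera placement about the subspace centre can be sketched as follows. All names and the direction table are illustrative assumptions, not the application's actual API; the sketch places the camera along the chosen direction, looking back at the centre of the enclosing cuboid:

```python
# Hypothetical mapping from the clicked icon label to a viewing direction
# relative to the subspace centre (Y is taken as the up axis).
VIEW_DIRECTIONS = {
    "front": (0, 0, 1), "back":   (0, 0, -1),
    "left": (-1, 0, 0), "right":  (1, 0, 0),
    "top":   (0, 1, 0), "bottom": (0, -1, 0),
}

def camera_for_view(box_min, box_max, view, distance=10.0):
    """Place the camera so the selection is seen from `view`, using the
    centre of the enclosing cuboid as the transformation reference."""
    center = tuple((a + b) / 2 for a, b in zip(box_min, box_max))
    d = VIEW_DIRECTIONS[view]
    eye = tuple(c + distance * di for c, di in zip(center, d))
    return eye, center            # camera position and look-at target
```

Because the reference is the subspace centre rather than any single model, a multi-model selection is framed as a whole, which is the behaviour described above for simultaneous transformation.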
The central axis applied in the view angle transformation can be determined as follows: one subspace may include a plurality of central axes; taking a cuboid as an example, one cuboid includes three central axes. In this embodiment, the central axis of the subspace perpendicular to the reference plane in the stereoscopic space is taken as the central axis for the view angle transformation, where the reference plane refers to the plane in which the bottom of the three-dimensional model lies in the stereoscopic space. Since the selectable view angles are defined relative to the reference plane, performing the view angle transformation of the target three-dimensional model about the axis perpendicular to the reference plane ensures the accuracy of the transformation.
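A rotation about that vertical central axis can be sketched as follows, assuming Y is the coordinate perpendicular to the reference plane (the helper name and parameterisation are hypothetical):

```python
import math

def rotate_about_vertical_axis(point, box_min, box_max, angle_deg):
    """Rotate `point` about the cuboid's central axis perpendicular to the
    reference plane (taken here as the Y axis through the box centre)."""
    cx = (box_min[0] + box_max[0]) / 2    # axis passes through the box centre
    cz = (box_min[2] + box_max[2]) / 2
    a = math.radians(angle_deg)
    x, y, z = point
    dx, dz = x - cx, z - cz               # offset from the axis in the XZ plane
    return (cx + dx * math.cos(a) + dz * math.sin(a),
            y,                            # height is unchanged by the rotation
            cz - dx * math.sin(a) + dz * math.cos(a))
```

Keeping the Y coordinate fixed is exactly why this axis choice preserves the model's relation to the reference plane during the transformation.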
In the actual application process, the selected target three-dimensional model may be a dynamic three-dimensional model, a static three-dimensional model, or a dynamic three-dimensional model and a static three-dimensional model simultaneously, wherein the static three-dimensional model refers to a three-dimensional model in a static state, and the dynamic three-dimensional model is a three-dimensional model in an animation state.
If the selected target three-dimensional models are all static three-dimensional models, the view transformation method in the embodiment can be directly applied to carry out view transformation.
If at least one of the selected target three-dimensional models is a dynamic three-dimensional model, then in order to perform the view angle transformation more accurately, the video formed by the dynamic three-dimensional model is paused before responding to the selection operation on the at least one target three-dimensional model. After the system detects the user's pause operation on that video, it determines the static picture displayed by the paused video. In a specific example, while the video formed by the dynamic three-dimensional model is being displayed, for example during animation playback, the user observes that the animation picture at the 70th second has a problem and wants to adjust it locally, so the user pauses the video to obtain the static picture of the key frame at the 70th second. In practice, the paused static picture may not land exactly on that key frame, in which case it can be reached via the frame-adjustment button on the currently displayed page. A paused dynamic three-dimensional model is in essence a static three-dimensional model, so the view angle transformation operation can be applied to it in the same manner as to a static three-dimensional model: in response to a view angle transformation operation on the dynamic three-dimensional model, performed as a view angle transformation on the static picture, the selected at least one target three-dimensional model including the dynamic three-dimensional model is transformed with the central axis of the subspace as the transformation reference.
In this way, when the target three-dimensional model comprises a dynamic model, after the pause operation of the video formed by the dynamic model by the user is detected, the visual angle of the dynamic model is changed by changing the visual angle of the static picture. The visual angle of the static picture of the animation model which the user wants to view can be accurately changed to the visual angle required by the user, so that the user can view the dynamic model at the visual angle better.
Because the transformed view angle is more favorable for observation, the user can make local or detail adjustments to the three-dimensional model after the transformation. When the user needs to continue observing the dynamic three-dimensional model to find other key frames requiring local or detail adjustment, playback is usually resumed; after detecting the user's resume-playing operation on the paused video, the system controls the dynamic three-dimensional model to be displayed dynamically at the target view angle. Specifically, resuming playback means that the animation composed of the dynamic three-dimensional model continues in its inherent display mode, with the display view angle being the target view angle.
Therefore, after the playing operation is resumed, the dynamic model is displayed dynamically at the target view angle, which ensures the display consistency of the dynamic model and facilitates the user's viewing of it.
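The pause, transform, and resume flow described above can be sketched as a small state holder (all names are illustrative; a real editor would drive this from its render loop):

```python
class DynamicModelView:
    """Sketch of pause -> transform -> resume for an animated model."""

    def __init__(self):
        self.playing = True
        self.paused_frame = None          # the static picture shown when paused
        self.view = "default"

    def pause(self, current_frame):
        self.playing = False
        self.paused_frame = current_frame

    def transform_view(self, target_view):
        if not self.playing:              # operate on the static picture
            self.view = target_view

    def resume(self):
        self.playing = True               # playback continues at the target view
        return self.view
```

The key property mirrored here is that the target view angle set while paused persists through `resume`, so the animation keeps displaying at the transformed angle.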
In the actual application process, after the view angle is transformed, if the selected target three-dimensional model and an unselected three-dimensional model have an occlusion relationship, the unselected three-dimensional model is displayed in a weakened manner. In this way, the unselected three-dimensional model does not block the selected target three-dimensional model, so that the user can perform local or detail adjustment on the target three-dimensional model at the transformed view angle.
Specifically, the weakened display may be performed in at least one of the following ways: semi-transparent display, blurred display, or gridded display. If only one unselected three-dimensional model needs to be weakened, one of these ways, such as semi-transparent display, can be used; if two or more unselected three-dimensional models need to be weakened, the same way, such as semi-transparent display, can be used for all of them, or different weakening ways can be applied to different models.
Therefore, the multiple weakening display modes help highlight the target three-dimensional model, so that the user can make local and detail adjustments to the target three-dimensional model at the transformed view angle.
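A sketch of assigning weakened display modes to the unselected models (the occlusion predicate is assumed to be supplied by the renderer; names are illustrative):

```python
WEAKEN_MODES = ("semi_transparent", "blurred", "gridded")

def weaken_unselected(unselected, occludes, mode="semi_transparent"):
    """Return a {model_id: display_mode} map: an unselected model that
    occludes a selected target model is weakened, the rest stay normal.
    `occludes` is a predicate the renderer would provide (assumed here)."""
    styles = {}
    for model_id in unselected:
        styles[model_id] = mode if occludes(model_id) else "normal"
    return styles
```

Per the text, `mode` could also vary per model when more than two occluders need weakening; a single shared mode is shown for brevity.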
In order to make the technical solution of the present application easier to understand, a specific example is described below.
In the first case, there is only one three-dimensional model in the current three-dimensional space and that three-dimensional model is the selected target three-dimensional model. Fig. 3 shows an interface diagram before the perspective transformation of a single three-dimensional model, fig. 4 shows an interface diagram of a selected state when the perspective transformation of the single three-dimensional model is performed, and fig. 5 shows an interface diagram after the perspective transformation of the single three-dimensional model is performed.
Referring to fig. 3, the respective areas or panels of the current interface are illustrated, wherein 31 denotes a panel of effect layers, for example for displaying created effect layers, each layer representing a function or element; 32 denotes a preview panel, such as for displaying design effects; 33, a 3D view panel, such as for user editing of basic editing such as three-dimensional model position, rotation, mirroring, size, animation, etc., where perspective transformation can be performed; and 34, a parameter panel, such as a specific value for setting a layer.
Specifically, the panels also support the following functions: setting the trigger logic of the special-effect layers, including the layers before and after triggering, the trigger order of the layers, and the trigger conditions of the layers. This enriches the user's shooting effects, giving the user more ways to play when shooting. Illustratively, the special-effect preview refers to the overall effect of a special-effect layer in the special-effect layer panel; the trigger preview refers to the overall effect of the logic view on the preview panel, including the special-effect layers and the trigger logic; the container preview refers to the preview of a single container, where a container represents a set of special-effect layers at a certain stage; and the partial preview refers to the partial effect of the trigger-logic view on the panel, where container links may form a path.
Referring to fig. 4, there is one target three-dimensional model 41, shown in a selected state by thickening its outline; 42 denotes the subspace including the target three-dimensional model, 43 denotes the central axis of the subspace, and 44 denotes the view angle transformation icon. In response to a view angle transformation operation on the target three-dimensional model 41, such as clicking "front" in the view angle transformation icon, the view angle of the selected target three-dimensional model is transformed to the front view, see fig. 5.
In the second case, there are multiple three-dimensional models in the current stereo space and one selected target three-dimensional model. FIG. 6 shows another interface diagram before the perspective transformation of the single three-dimensional model, FIG. 7 shows another interface diagram of a selected state when the perspective of the single three-dimensional model is transformed, and FIG. 8 shows another interface diagram after the perspective transformation of the single three-dimensional model.
Referring to fig. 7, the target three-dimensional model is 71, that is, there is one target three-dimensional model, shown in a selected state by thickening its outline; 72 denotes the subspace including the target three-dimensional model 71, 73 denotes the central axis of the subspace, and 74 denotes the view angle transformation icon. In response to a view angle transformation operation on the target three-dimensional model, such as clicking "front" in the view angle transformation icon 74, the view angle of the selected target three-dimensional model is transformed to the front view, see fig. 8. In this case, since the selected target three-dimensional model 71 has an occlusion relationship with an unselected three-dimensional model, the unselected three-dimensional model is displayed in a weakened manner; 81 shows the effect after semi-transparent display.
In a third case, there are two selected target three-dimensional models in the current three-dimensional space, fig. 9 shows an interface diagram before view angle conversion of a multi-three-dimensional model, fig. 10 shows an interface diagram in a selected state when view angles of the multi-three-dimensional model are converted, and fig. 11 shows an interface diagram after view angle conversion of the multi-three-dimensional model.
Referring to fig. 10, the target three-dimensional models are 101 and 102, shown in a selected state by thickening the outlines of both; 103 denotes the subspace including the two target three-dimensional models, 104 denotes the central axis of the subspace, and 105 denotes the view angle transformation icon. In response to a view angle transformation operation on the two target three-dimensional models, such as clicking "front" in the view angle transformation icon, the view angles of the two selected target three-dimensional models are transformed to the front view, see fig. 11. In this case, since the selected target three-dimensional models have an occlusion relationship with an unselected three-dimensional model, the unselected three-dimensional model is displayed in a weakened manner; 111 shows the effect after semi-transparent display.
Generally, one application of perspective transformation of a three-dimensional model is making a special effect. Taking a "wearing glasses" effect as an example, see fig. 12, if a user cannot adjust the "wearing glasses" effect well at the current view angle, the current three-dimensional model needs to be adjusted to a view angle convenient for operation.
In the above examples, the view angle transformation icon is used for illustration; in practical application, the transformation can also be triggered by shortcut keys, for example the shortcut keys F, L, R, T, B, and G representing the front, left, right, top, back, and bottom views, respectively.
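Assuming the letters map to views as F→front, L→left, R→right, T→top, B→back, G→bottom (an interpretation consistent with the "pressing the 'F' key focuses on the front side" example given below; the mapping and helper name are otherwise assumptions), the shortcut handling might be sketched as:

```python
# Hypothetical shortcut-key table mirroring the view angle transformation icon.
SHORTCUTS = {
    "F": "front", "L": "left", "R": "right",
    "T": "top",   "B": "back", "G": "bottom",
}

def view_for_key(key):
    """Return the target view for a pressed key, or None if unmapped."""
    return SHORTCUTS.get(key.upper())
```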
In summary, taking a "head model" as an example of the three-dimensional model, the embodiments of the present application can implement the following functions:
(1) View angle transformation of a single static three-dimensional model: the user may select the three-dimensional model to be viewed and then click the shortcut key/direction indicator (view angle transformation icon) at the upper right corner to transform the view angle of the selected three-dimensional model. For example, pressing the "F" key after selecting the head-model three-dimensional model automatically focuses the view on the front side of the selected head model. If the selected object has an occlusion relationship with other objects, the unselected objects are weakened.
(2) View angle transformation of multiple static three-dimensional models: the user may select multiple three-dimensional models simultaneously and then click the shortcut key/direction indicator at the upper right corner, and the view automatically focuses on the overall angle of the selected three-dimensional models. If the selected objects have an occlusion relationship with other objects, the unselected objects are weakened.
(3) View angle transformation of a single three-dimensional model in an animated state: while the animation is playing or paused, the user selects the three-dimensional model and then clicks the shortcut key/direction indicator at the upper right corner, and the view automatically focuses on the corresponding angle of the selected three-dimensional model. When the animation then plays, the view angle automatically follows the three-dimensional model and the angle remains unchanged.
(4) View angle transformation of multiple three-dimensional models in an animated state: while the animation is playing or paused, the user multi-selects the three-dimensional models and then clicks the shortcut key/direction indicator at the upper right corner, and the view automatically focuses on the overall angle of the selected three-dimensional models. When the animation then plays, the view angle automatically follows the selected three-dimensional models and the angle remains unchanged. If the selected objects have an occlusion relationship with other objects, the unselected objects are weakened.
(5) View angle transformation of key frames in the animated state: while the animation is playing or paused, the user single-selects a key frame and then clicks the shortcut key/direction indicator at the upper right corner, and the view automatically focuses on the angle of that key frame, to facilitate comparison and adjustment of key-frame positions.
Therefore, this embodiment supports viewing single or multiple three-dimensional models at fixed angles in both animated and static states, meets more use scenarios such as position comparison, and improves production efficiency.
As shown in fig. 13, based on the same inventive concept as the above-described perspective transformation method of the three-dimensional model, the embodiment of the present application further provides a perspective transformation apparatus of the three-dimensional model, which includes a target three-dimensional model determination unit 131, a subspace determination unit 132, and a perspective transformation unit 133.
a target three-dimensional model determination unit 131 configured to, in response to a selection operation on at least one target three-dimensional model, determine the selected at least one target three-dimensional model in the stereoscopic space in which the three-dimensional models are displayed;
a subspace determination unit 132 configured to perform determining a subspace including the selected at least one target three-dimensional model from the stereoscopic space;
and a view transformation unit 133 configured to perform a view transformation on the selected at least one target three-dimensional model with a central axis of the subspace as a transformation reference in response to a view transformation operation on the at least one target three-dimensional model.
In some exemplary embodiments, the apparatus further includes a weakening display unit configured to perform, after the view angle transformation of the at least one target three-dimensional model:
in the transformed view angle, if the selected at least one target three-dimensional model has an occlusion relationship with an unselected three-dimensional model, displaying the unselected three-dimensional model in a weakened manner.
In some exemplary embodiments, the weakening display unit is specifically configured to perform weakening display by at least one of the following ways:
a semi-transparent display, a blurred display, or a gridded display.
In some exemplary embodiments, the view angle transforming unit 133 is configured to perform:
in response to a perspective transformation operation on at least one target three-dimensional model, determining a target perspective matched with the perspective transformation operation;
and performing visual angle transformation on the at least one target three-dimensional model by taking the central axis of the subspace as a transformation reference until the visual angle of the at least one target three-dimensional model is transformed to a target visual angle.
In some exemplary embodiments, if at least one of the selected target three-dimensional models is a dynamic three-dimensional model, the method further includes a suspending unit, and before responding to the selection operation of the at least one target three-dimensional model, the suspending unit is configured to perform:
after detecting the pause operation of a user on the video formed by the dynamic three-dimensional model, determining a static picture displayed by the paused video;
the view angle transforming unit 133 is specifically configured to perform:
and in response to a view angle transformation operation on the dynamic three-dimensional model through view angle transformation on the static picture, performing view angle transformation on at least one selected target three-dimensional model including the dynamic three-dimensional model with the central axis of the subspace as a transformation reference.
In some exemplary embodiments, the dynamic presentation unit is further included, and after performing perspective transformation on the selected at least one target three-dimensional model including the dynamic three-dimensional model, the dynamic presentation unit is configured to perform:
and after the resuming playing operation of the video formed by the paused dynamic three-dimensional model by the user is detected, controlling the dynamic three-dimensional model to dynamically display according to the target visual angle.
In some exemplary embodiments, the central axis of the subspace is a central axis perpendicular to a reference plane in the stereoscopic space.
For implementation and beneficial effects of the operations in the apparatus for transforming a perspective of a three-dimensional model, reference is made to the description of the method above, and further description is omitted here.
Having described the perspective transformation method and apparatus of a three-dimensional model according to an exemplary embodiment of the present application, an electronic device according to another exemplary embodiment of the present application will be described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the steps of the method for perspective transformation of a three-dimensional model according to the various exemplary embodiments of the present application described above in this specification. For example, the processor may perform the steps of the perspective transformation method of the three-dimensional model.
The electronic device 140 according to this embodiment of the present application is described below with reference to fig. 14. The electronic device 140 shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 14, the electronic device 140 is represented in the form of a general electronic device. The components of the electronic device 140 may include, but are not limited to: the at least one processor 141, the at least one memory 142, and a bus 143 that couples various system components including the memory 142 and the processor 141.
Bus 143 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 142 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1421 and/or cache memory 1422, and may further include Read Only Memory (ROM) 1423.
Memory 142 may also include a program/utility 1425 having a set (at least one) of program modules 1424, such program modules 1424 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 140 may also communicate with one or more external devices 144 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 140, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 140 to communicate with one or more other electronic devices. Such communication may be through an input/output (I/O) interface 145. Also, the electronic device 140 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 146. As shown, the network adapter 146 communicates with other modules for the electronic device 140 over the bus 143. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 140, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 142 comprising instructions, executable by the processor 141 of the apparatus to perform the method described above is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by the processor 141, implement any of the perspective transformation methods of the three-dimensional model as provided herein.
In an exemplary embodiment, aspects of a perspective transformation method for a three-dimensional model provided in the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of the perspective transformation method for a three-dimensional model according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for perspective transformation of a three-dimensional model of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for transforming a viewing angle of a three-dimensional model, comprising:
in response to a selection operation on at least one target three-dimensional model, determining the selected at least one target three-dimensional model in a stereoscopic space in which three-dimensional models are displayed;
determining, from the stereoscopic space, a subspace containing the selected at least one target three-dimensional model; and
in response to a viewing-angle transformation operation on the at least one target three-dimensional model, performing a viewing-angle transformation on the selected at least one target three-dimensional model with a central axis of the subspace as a transformation reference.
2. The method of claim 1, further comprising, after the viewing-angle transformation of the at least one target three-dimensional model:
at the transformed viewing angle, if an occlusion relationship exists between the selected at least one target three-dimensional model and an unselected three-dimensional model, displaying the unselected three-dimensional model in a weakened manner.
3. The method of claim 2, wherein the weakened display is performed in at least one of the following manners:
semi-transparent display, blurred display, or gridded (wireframe) display.
4. The method of claim 1, wherein performing the viewing-angle transformation on the at least one target three-dimensional model with the central axis of the subspace as the transformation reference, in response to the viewing-angle transformation operation on the at least one target three-dimensional model, comprises:
in response to the viewing-angle transformation operation on the at least one target three-dimensional model, determining a target viewing angle matching the viewing-angle transformation operation; and
performing the viewing-angle transformation on the at least one target three-dimensional model with the central axis of the subspace as the transformation reference until the at least one target three-dimensional model is transformed to the target viewing angle.
5. The method of claim 4, wherein, if the selected at least one target three-dimensional model comprises a dynamic three-dimensional model, the method further comprises, before responding to the selection operation on the at least one target three-dimensional model:
upon detecting a user's pause operation on a video formed by the dynamic three-dimensional model, determining a still frame displayed by the paused video;
and wherein performing the viewing-angle transformation on the selected at least one target three-dimensional model with the central axis of the subspace as the transformation reference, in response to the viewing-angle transformation operation on the at least one target three-dimensional model, comprises:
in response to a viewing-angle transformation operation applied to the dynamic three-dimensional model through a viewing-angle transformation of the still frame, performing the viewing-angle transformation on the selected at least one target three-dimensional model, including the dynamic three-dimensional model, with the central axis of the subspace as the transformation reference.
6. The method of claim 5, further comprising, after the viewing-angle transformation of the selected at least one target three-dimensional model including the dynamic three-dimensional model:
upon detecting a user's resume-playback operation on the paused video formed by the dynamic three-dimensional model, controlling the dynamic three-dimensional model to be displayed dynamically at the target viewing angle.
7. The method of any one of claims 1 to 6, wherein the central axis of the subspace is a central axis perpendicular to a reference plane of the stereoscopic space.
8. A device for transforming a viewing angle of a three-dimensional model, comprising:
a target three-dimensional model determination unit configured to determine, in response to a selection operation on at least one target three-dimensional model, the selected at least one target three-dimensional model in a stereoscopic space in which three-dimensional models are displayed;
a subspace determination unit configured to determine, from the stereoscopic space, a subspace containing the selected at least one target three-dimensional model; and
a viewing-angle transformation unit configured to perform, in response to a viewing-angle transformation operation on the at least one target three-dimensional model, a viewing-angle transformation on the selected at least one target three-dimensional model with a central axis of the subspace as a transformation reference.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the viewing-angle transformation method for a three-dimensional model according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the viewing-angle transformation method for a three-dimensional model according to any one of claims 1 to 7.
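The core steps of claims 1 and 7 — determining a subspace that bounds the selected models and transforming the viewing angle about the subspace's vertical central axis — can be sketched as follows. This is a minimal illustration under assumed conventions (y-up coordinates, axis-aligned bounding boxes, models given as vertex lists); `bounding_subspace`, `central_axis`, and `orbit_camera` are hypothetical names, not from the patent:

```python
import math

def bounding_subspace(models):
    """Smallest axis-aligned box containing every selected model's vertices."""
    pts = [p for m in models for p in m["vertices"]]
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) for i in range(3))
    return lo, hi

def central_axis(lo, hi):
    """Vertical axis through the subspace centre, perpendicular to the ground plane."""
    cx = (lo[0] + hi[0]) / 2.0
    cz = (lo[2] + hi[2]) / 2.0
    return cx, cz  # the axis is the vertical line x = cx, z = cz

def orbit_camera(cam_pos, lo, hi, angle_deg):
    """Rotate the camera position about the subspace's central axis (the
    transformation reference of claim 1), leaving camera height unchanged."""
    cx, cz = central_axis(lo, hi)
    a = math.radians(angle_deg)
    x, y, z = cam_pos
    dx, dz = x - cx, z - cz
    return (cx + dx * math.cos(a) - dz * math.sin(a),
            y,
            cz + dx * math.sin(a) + dz * math.cos(a))
```

Because the rotation is applied about the subspace's axis rather than any single model's axis, a multi-model selection stays centred in view while the angle changes.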
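The weakened display of claims 2 and 3 can be sketched with a simple occlusion test: fade any unselected model whose bounding sphere intersects the camera-to-target line segment. The sphere approximation, the 0.3 alpha value, and all function names are illustrative assumptions, not taken from the patent:

```python
def occludes(cam, target, obstacle_center, obstacle_radius):
    """True if the obstacle's bounding sphere intersects the camera-to-target segment."""
    cx, cy, cz = cam
    tx, ty, tz = target
    ox, oy, oz = obstacle_center
    dx, dy, dz = tx - cx, ty - cy, tz - cz
    seg_len_sq = dx * dx + dy * dy + dz * dz
    # Parameter of the point on the segment closest to the obstacle centre.
    t = ((ox - cx) * dx + (oy - cy) * dy + (oz - cz) * dz) / seg_len_sq
    t = max(0.0, min(1.0, t))
    px, py, pz = cx + t * dx, cy + t * dy, cz + t * dz
    dist_sq = (px - ox) ** 2 + (py - oy) ** 2 + (pz - oz) ** 2
    return dist_sq <= obstacle_radius ** 2

def weaken_unselected(cam, target, scene):
    """Fade unselected models that sit between the camera and the target
    (semi-transparent display, one of the manners listed in claim 3)."""
    for model in scene:
        if not model["selected"] and occludes(cam, target, model["center"], model["radius"]):
            model["alpha"] = 0.3  # assumed weakening factor
        else:
            model["alpha"] = 1.0
```

Blurred or wireframe display would use the same occlusion test and swap the alpha change for a different render state.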
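The pause-transform-resume flow of claims 5 and 6 amounts to a small state machine: the viewing angle is only adjustable on the still frame of a paused video, and playback resumes at the target viewing angle. The class below is a hypothetical sketch of that control flow, not an implementation from the patent:

```python
class DynamicModelViewer:
    """Sketch of the claim 5/6 flow for a video formed by a dynamic 3D model."""

    def __init__(self):
        self.playing = True
        self.frame = 0
        self.view_angle = 0.0

    def pause(self):
        """Pause playback and return the still frame currently displayed."""
        self.playing = False
        return self.frame

    def transform_view(self, target_angle):
        """Change the viewing angle; only honoured on the paused still frame."""
        if not self.playing:
            self.view_angle = target_angle

    def resume(self):
        """Resume playback; the model is then displayed dynamically at the
        target viewing angle set while paused."""
        self.playing = True
```

A real renderer would replace `frame` and `view_angle` with the decoded frame buffer and the camera transform, but the ordering constraint is the same.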
CN202110843503.5A 2021-07-26 2021-07-26 Method, device and equipment for changing visual angle of three-dimensional model and storage medium Pending CN113570693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110843503.5A CN113570693A (en) 2021-07-26 2021-07-26 Method, device and equipment for changing visual angle of three-dimensional model and storage medium


Publications (1)

Publication Number Publication Date
CN113570693A true CN113570693A (en) 2021-10-29

Family

ID=78167268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110843503.5A Pending CN113570693A (en) 2021-07-26 2021-07-26 Method, device and equipment for changing visual angle of three-dimensional model and storage medium

Country Status (1)

Country Link
CN (1) CN113570693A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005269010A (en) * 2004-03-17 2005-09-29 Olympus Corp Image creating device, program and method
CN109242978A (en) * 2018-08-21 2019-01-18 百度在线网络技术(北京)有限公司 The visual angle regulating method and device of threedimensional model
CN110896495A (en) * 2019-11-19 2020-03-20 北京字节跳动网络技术有限公司 View adjustment method and device for target device, electronic device and medium
CN112245914A (en) * 2020-11-11 2021-01-22 网易(杭州)网络有限公司 Visual angle adjusting method and device, storage medium and computer equipment


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
3D建模系统教学: "[3DMAX Game Modeling] Complete 3ds Max tutorial set: modeling from zero basics to mastery", Retrieved from the Internet <URL:https://www.bilibili.com/video/av584176608/?p=30> *
SUPERWIWI: "[Unity] Making other objects transparent when they block the main character", Retrieved from the Internet <URL:https://blog.csdn.net/qq_36622009/article/details/83240594> *
ZOK93: "[Unity] Rendering an obstacle semi-transparent when it occludes the player character", Retrieved from the Internet <URL:https://blog.csdn.net/sinat_20559947/article/details/50427921?spm=1001.2101.3001.6661.1&utm_medium=distribute.pc_relevant_t0.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7EPaidSort-1-50427921-blog-9733277.235%5Ev43%5Epc_blog_bottom_relevance_base4&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7EPaidSort-1-50427921-blog-9733277.235%5Ev43%5Epc_blog_bottom_relevance_base4&utm_relevant_index=1> *
下载吧: "How to switch to a free viewing angle in 3D Studio Max", Retrieved from the Internet <URL:https://www.xiazaiba.com/jiaocheng/53087.html> *
阿升哥哥: "Solution for the camera being occluded by objects while following the character in unity3d", Retrieved from the Internet <URL:https://www.cnblogs.com/shenggege/p/4113316.html> *

Similar Documents

Publication Publication Date Title
US9727300B2 (en) Identifying the positioning in a multiple display grid
CN107977141B (en) Interaction control method and device, electronic equipment and storage medium
US11316896B2 (en) Privacy-preserving user-experience monitoring
US10552521B2 (en) Analyzing a click path in a spherical landscape viewport
US11175791B1 (en) Augmented reality system for control boundary modification
US11838491B2 (en) 3D display system and 3D display method
CN112532896A (en) Video production method, video production device, electronic device and storage medium
CN112165635A (en) Video conversion method, device, system and storage medium
US10798037B1 (en) Media content mapping
US10169899B2 (en) Reactive overlays of multiple representations using augmented reality
US10839036B2 (en) Web browser having improved navigational functionality
US11093041B2 (en) Computer system gesture-based graphical user interface control
US10222858B2 (en) Thumbnail generation for digital images
CN113570693A (en) Method, device and equipment for changing visual angle of three-dimensional model and storage medium
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
US10917679B2 (en) Video recording of a display device
CN110941389A (en) Method and device for triggering AR information points by focus
CN113840165B (en) Screen recording method, device, equipment and medium
CN115248655A (en) Method and apparatus for displaying information
CN116360906A (en) Interactive control method and device, head-mounted display equipment and medium
CN117148966A (en) Control method, control device, head-mounted display device and medium
CN114793274A (en) Data fusion method and device based on video projection
CN114780504A (en) Web end interaction management method and device, storage medium and electronic equipment
CN115830282A (en) Image conversion method and device and storage medium
CN114327167A (en) Resource object display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination