CN117611770A - Virtual object moving method, device, equipment and medium - Google Patents


Publication number
CN117611770A
Authority
CN
China
Prior art keywords: coordinate system, preset, virtual object, moved, under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311360025.8A
Other languages
Chinese (zh)
Inventor
张经纬
郝雪洁
李心悦
张璐
陶明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Renyimen Technology Co ltd
Original Assignee
Shanghai Renyimen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Renyimen Technology Co ltd filed Critical Shanghai Renyimen Technology Co ltd
Priority to CN202311360025.8A
Publication of CN117611770A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Abstract

The application discloses a virtual object moving method, device, equipment and medium, relating to the technical field of computers, wherein the method comprises the following steps: emitting rays from an anchor point of the virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system; determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate; determining a second target movement vector of the virtual object to be moved under a preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system; and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system. The moving efficiency of the virtual object can be improved.

Description

Virtual object moving method, device, equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for moving a virtual object.
Background
With the continual updating of rendering technology and graphics hardware, more and more software applications need to present and render digital three-dimensional worlds. To meet the related display requirements, operations often need to be performed on objects in the virtual world, such as moving a camera so that a target object is located at the center of a scene. However, when coordinates are moved in the virtual world, the resulting display effect on the screen cannot be intuitively predicted, so the positions of objects often need to be adjusted interactively, resulting in low moving efficiency and poor user experience.
In summary, how to improve the moving efficiency of virtual objects is a problem to be solved in the art.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide a virtual object moving method, apparatus, device, and medium, which can improve the moving efficiency of a virtual object. The specific scheme is as follows:
in a first aspect, the present application discloses a virtual object moving method, including:
emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system;
determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate;
determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
Optionally, before emitting the ray from the anchor point of the virtual object to be moved under the preset camera coordinate system, the method further includes:
determining an anchor point of a virtual object to be moved under a preset screen coordinate system, and receiving a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and carrying out coordinate system conversion on the anchor points under the preset screen coordinate system to obtain anchor points of the virtual object to be moved under the preset camera coordinate system.
Optionally, the emitting a ray from an anchor point of the virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the ray and the virtual object to be moved under the preset camera coordinate system includes:
determining an anchor point of a virtual object to be moved under a preset camera coordinate system as an emission origin;
and emitting rays from the emission origin to the vertical axis positive direction of the preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
Optionally, the determining the first target motion vector in the preset screen coordinate system based on the unit motion vector in the preset camera coordinate system and the first intersection point coordinate includes:
moving the first intersection point coordinate by a unit movement vector on a vertical plane under the preset camera coordinate system to obtain a first moved coordinate under the preset camera coordinate system;
converting the first moved coordinate and the first intersection point coordinate respectively to obtain a second moved coordinate and a second intersection point coordinate under a preset screen coordinate system;
and determining a first target movement vector under a preset screen coordinate system based on the second moved coordinate and the second intersection point coordinate.
Optionally, the converting the coordinate system of the first post-movement coordinate and the first intersection coordinate to obtain a second post-movement coordinate and a second intersection coordinate under a preset screen coordinate system includes:
and determining a projection transformation relation between a preset screen coordinate system and the preset camera coordinate system, and respectively carrying out coordinate system conversion on the first moved coordinate and the first intersection point coordinate by utilizing the projection transformation relation so as to obtain a second moved coordinate and a second intersection point coordinate under the preset screen coordinate system.
Optionally, the determining, by using the first target motion vector and the screen coordinate offset value of the virtual object to be moved in the preset screen coordinate system, the second target motion vector of the virtual object to be moved in the preset camera coordinate system includes:
and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the unit movement vector, the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system based on a triangle similarity principle.
Optionally, the transforming the coordinate system of the second target motion vector to obtain a third target motion vector under a preset three-dimensional world coordinate system includes:
determining a camera transformation inverse matrix between a preset three-dimensional world coordinate system and the preset camera coordinate system;
and converting the coordinate system of the second target movement vector by using the camera transformation inverse matrix to obtain a third target movement vector under a preset three-dimensional world coordinate system.
In a second aspect, the present application discloses a virtual object moving apparatus, comprising:
the intersection point coordinate acquisition module is used for transmitting rays from an anchor point of the virtual object to be moved under a preset camera coordinate system so as to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system;
a first vector acquisition module for determining a first target motion vector in a preset screen coordinate system based on the unit motion vector in the preset camera coordinate system and the first intersection point coordinate;
the second vector acquisition module is used for determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by utilizing the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
the virtual object moving module is used for converting the coordinate system of the second target moving vector to obtain a third target moving vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target moving vector under the preset three-dimensional world coordinate system.
In a third aspect, the present application discloses an electronic device comprising:
a memory for storing a computer program;
and a processor for executing the computer program to implement the steps of the disclosed virtual object moving method.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the steps of the disclosed virtual object movement method.
The beneficial effects of the application are that: emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system; determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate; determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system; and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system. Therefore, the anchor point of the virtual object to be moved is taken as an emission origin to emit rays, a first intersection point coordinate under a preset camera coordinate system is obtained, a first target movement vector under the preset screen coordinate system is determined, then a second target movement vector of the virtual object to be moved under the preset camera coordinate system is determined by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system, and then the second target movement vector is subjected to coordinate system conversion to obtain a third target movement vector under the preset three-dimensional world coordinate system.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from the provided drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for moving a virtual object disclosed in the present application;
FIG. 2 is a schematic view of a movement under a specific screen coordinate system disclosed in the present application;
FIG. 3 is a schematic view of movement in a specific three-dimensional world coordinate system disclosed herein;
FIG. 4 is a flowchart of a specific virtual object movement method disclosed in the present application;
FIG. 5 is a flowchart of another specific virtual object movement method disclosed in the present application;
FIG. 6 is a schematic diagram of a virtual object mobile device disclosed in the present application;
fig. 7 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With the continual updating of rendering technology and graphics hardware, more and more software applications need to present and render digital three-dimensional worlds. To meet the related display requirements, operations often need to be performed on objects in the virtual world, such as moving a camera so that a target object is located at the center of a scene. However, when coordinates are moved in the virtual world, the resulting display effect on the screen cannot be intuitively predicted, so the positions of objects often need to be adjusted interactively, resulting in low moving efficiency and poor user experience.
In view of this, the present application provides a corresponding virtual object moving scheme, which can improve the moving efficiency of a virtual object.
Referring to fig. 1, an embodiment of the present application discloses a virtual object moving method, including:
step S11: emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
In this embodiment, before emitting the ray from the anchor point of the virtual object to be moved in the preset camera coordinate system, the method further includes: determining an anchor point of a virtual object to be moved under a preset screen coordinate system, and receiving a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system; and performing coordinate system conversion on the anchor point under the preset screen coordinate system to obtain the anchor point of the virtual object to be moved under the preset camera coordinate system. A preset screen coordinate system O_xy, a preset three-dimensional world coordinate system W_xyz and a preset camera coordinate system C_xyz are determined. For example, FIG. 2 shows a specific movement diagram under a screen coordinate system, wherein the anchor point P1 of the virtual object to be moved has the anchor point coordinate P1^o(x, y) under the preset screen coordinate system, and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system is v1^o(x, y). Coordinate system conversion is performed on the anchor point coordinate P1^o(x, y) under the preset screen coordinate system to obtain the anchor point P1^c(x, y, 0) of the virtual object to be moved under the preset camera coordinate system. A ray is emitted from the anchor point of the virtual object to be moved under the preset camera coordinate system to obtain the first intersection point coordinate P2^c of the ray and the virtual object to be moved under the preset camera coordinate system, wherein the projection of the first intersection point coordinate onto the preset screen coordinate system coincides with P1^o(x, y), i.e. as shown in the following formula:
projection(P2^c) = P1^o(x, y)
step S12: a first target movement vector in a preset screen coordinate system is determined based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate.
Moving the first intersection point coordinate by a unit movement vector on a vertical plane under a preset camera coordinate system to obtain a first moved coordinate under the preset camera coordinate system; and determining a projection transformation relation between the preset screen coordinate system and the preset camera coordinate system, and respectively converting the first moved coordinate and the first intersection point coordinate by utilizing the projection transformation relation to obtain a second moved coordinate and a second intersection point coordinate under the preset screen coordinate system, and determining a first target movement vector under the preset screen coordinate system based on the second moved coordinate and the second intersection point coordinate.
Step S13: and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system.
In this embodiment, the determining, by using the first target movement vector and the screen coordinate offset value of the virtual object to be moved in the preset screen coordinate system, the second target movement vector of the virtual object to be moved in the preset camera coordinate system includes: determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the unit movement vector, the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system based on a triangle similarity principle, specifically as shown in the following formula:
v2^c = v_unit^c * (|v1^o| / |v_unit^o|)
wherein v2^c represents the second target movement vector of the virtual object to be moved under the preset camera coordinate system, v_unit^o represents the first target movement vector under the preset screen coordinate system, v_unit^c represents the unit movement vector under the preset camera coordinate system, and v1^o represents the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system.
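The triangle-similarity step can be sketched as a small helper. This is an illustrative assumption rather than code from the disclosure: the function name is hypothetical, and the ratio is taken between the magnitude of the received screen coordinate offset value (v1_o) and the magnitude of the screen displacement (v_unit_o) produced by a unit camera-space movement (v_unit_c).

```python
import numpy as np

def second_target_vector(v_unit_c, v_unit_o, v1_o):
    """Triangle-similarity scaling: a unit camera-space movement v_unit_c
    produces the screen displacement v_unit_o, so the camera-space movement
    that produces the desired screen offset v1_o is v_unit_c scaled by the
    ratio of the two screen-space magnitudes."""
    scale = np.linalg.norm(v1_o) / np.linalg.norm(v_unit_o)
    return np.asarray(v_unit_c, dtype=float) * scale
```

For example, if moving one unit along the camera x axis shifts the object by 10 screen pixels and the user dragged 30 pixels, the object must move 3 units in camera space.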
Step S14: and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
In this embodiment, the converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system includes: determining a camera transformation inverse matrix between a preset three-dimensional world coordinate system and the preset camera coordinate system; and converting the coordinate system of the second target movement vector by using the camera transformation inverse matrix to obtain a third target movement vector under the preset three-dimensional world coordinate system. A camera transformation matrix M_c between the preset three-dimensional world coordinate system and the preset camera coordinate system is determined; the camera transformation inverse matrix between the preset three-dimensional world coordinate system and the preset camera coordinate system is then (M_c)^(-1). Coordinate system conversion is performed on the second target movement vector v2^c by using the camera transformation inverse matrix (M_c)^(-1) to obtain the third target movement vector under the preset three-dimensional world coordinate system, specifically as shown in the following formula:
v^w = (M_c)^(-1) * v2^c
wherein v^w represents the third target movement vector under the preset three-dimensional world coordinate system.
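The inverse-matrix conversion can be sketched as follows, assuming M_c is a 4x4 homogeneous world-to-camera matrix. Treating the movement vector as a direction (homogeneous w = 0), so that the translation part of the matrix is ignored, is an implementation assumption of this sketch, as is the function name.

```python
import numpy as np

def third_target_vector(M_c, v2_c):
    """Convert the second target movement vector v2_c (camera space) into the
    third target movement vector (world space) with the camera transformation
    inverse matrix (M_c)^-1. The vector is lifted to homogeneous form with
    w = 0 because a movement vector is a direction, not a point."""
    v_h = np.append(np.asarray(v2_c, dtype=float), 0.0)
    return (np.linalg.inv(M_c) @ v_h)[:3]
```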
For example, in the specific movement diagram under a three-dimensional world coordinate system shown in fig. 3, the third target movement vector v^w under the preset three-dimensional world coordinate system is obtained according to the anchor point P1 of the virtual object to be moved under the preset screen coordinate system and the screen coordinate offset value v1^o, and the virtual object to be moved can be directly and accurately controlled to move without interactive position adjustment.
The beneficial effects of the application are that: emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system; determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate; determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system; and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system. Therefore, the anchor point of the virtual object to be moved is taken as an emission origin to emit rays, a first intersection point coordinate under a preset camera coordinate system is obtained, a first target movement vector under the preset screen coordinate system is determined, then a second target movement vector of the virtual object to be moved under the preset camera coordinate system is determined by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system, and then the second target movement vector is subjected to coordinate system conversion to obtain a third target movement vector under the preset three-dimensional world coordinate system.
Referring to fig. 4, an embodiment of the present application discloses a specific virtual object moving method, which includes:
step S21: determining an anchor point of a virtual object to be moved under a preset camera coordinate system as an emission origin; and emitting rays from the emission origin to the vertical axis positive direction of the preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
In this embodiment, rays are emitted from the anchor point, and the ray expression is specifically as follows:
D(t) = P1^c(x, y, 0) + t * v(0, 0, 1)
wherein D(t) represents a function D with argument t, and v represents the direction vector toward the positive z-axis (the vertical axis positive direction of the preset camera coordinate system); that is, the ray D(t) is a ray emitted from the anchor point toward the positive z-axis direction, wherein P1^c(x, y, 0) = (M_c)^(-1) * projection^(-1)(P1^o(x, y)), and projection represents the projective transformation between the preset screen coordinate system and the preset camera coordinate system.
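A minimal sketch of the ray expression D(t) = P1^c(x, y, 0) + t * v(0, 0, 1) follows. A real engine would intersect this picking ray with the object's mesh; here a plane z = z0 stands in for the object's surface, and both function names are hypothetical.

```python
import numpy as np

def ray_point(P1_c, t):
    """Point on the ray D(t) = P1_c + t * v, where v = (0, 0, 1) is the
    positive direction of the vertical (z) axis of the camera coordinate
    system."""
    v = np.array([0.0, 0.0, 1.0])
    return np.asarray(P1_c, dtype=float) + t * v

def intersect_z_plane(P1_c, z0):
    """First intersection of the ray with the plane z = z0, standing in for
    the surface of the virtual object to be moved (valid when z0 is at or
    beyond the anchor's z value)."""
    t = z0 - P1_c[2]
    return ray_point(P1_c, t)
```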
Step S22: a first target movement vector in a preset screen coordinate system is determined based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate.
Step S23: and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system.
Step S24: and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
Therefore, the present application emits a ray with the anchor point of the virtual object to be moved as the emission origin to obtain the first intersection point coordinate under the preset camera coordinate system, and determines the first target movement vector under the preset screen coordinate system. The second target movement vector of the virtual object to be moved under the preset camera coordinate system is then obtained through the projection transformation between the preset camera coordinate system and the preset screen coordinate system, and the third target movement vector of the virtual object to be moved under the preset three-dimensional world coordinate system is obtained through the coordinate system conversion relation between the preset camera coordinate system and the preset three-dimensional world coordinate system, so that the virtual object to be moved can be accurately controlled to move, improving the movement efficiency and user experience.
Referring to fig. 5, an embodiment of the present application discloses a specific virtual object moving method, which includes:
step S31: emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
Step S32: and moving the first intersection point coordinate by a unit movement vector on a vertical plane under the preset camera coordinate system to obtain a first moved coordinate under the preset camera coordinate system.
In this embodiment, the first intersection point coordinate P2^c is moved by a unit movement vector v_unit^c on the vertical plane under the preset camera coordinate system (i.e. the z-plane whose z-value is equal to the z-value of P2^c) to obtain the first moved coordinate P3^c under the preset camera coordinate system, wherein P3^c = P2^c + v_unit^c.
Step S33: and respectively converting the first moved coordinate and the first intersection point coordinate to obtain a second moved coordinate and a second intersection point coordinate under a preset screen coordinate system.
In this embodiment, the converting the coordinate system of the first post-movement coordinate and the first intersection coordinate to obtain a second post-movement coordinate and a second intersection coordinate under a preset screen coordinate system includes: and determining a projection transformation relation between a preset screen coordinate system and the preset camera coordinate system, and respectively carrying out coordinate system conversion on the first moved coordinate and the first intersection point coordinate by utilizing the projection transformation relation so as to obtain a second moved coordinate and a second intersection point coordinate under the preset screen coordinate system.
The projection transformation relation projection between the preset screen coordinate system and the preset camera coordinate system is determined, and coordinate system conversion is performed on the first moved coordinate and the first intersection point coordinate to obtain the second moved coordinate and the second intersection point coordinate under the preset screen coordinate system. The specific formula for converting the first moved coordinate is as follows:
P3^o = projection(P3^c)
wherein P3^o represents the second moved coordinate under the preset screen coordinate system.
Step S34: and determining a first target movement vector under a preset screen coordinate system based on the second moved coordinate and the second intersection point coordinate.
The coordinate offset from the second intersection point coordinate to the second moved coordinate under the preset screen coordinate system, i.e. the first target movement vector under the preset screen coordinate system, is calculated by the following specific formula:
v_unit^o = P3^o - P2^o
wherein v_unit^o represents the first target movement vector under the preset screen coordinate system and P2^o represents the second intersection point coordinate under the preset screen coordinate system.
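The projection of the moved point and the screen-space difference (writing P2_c for the first intersection point, P3_c for the first moved coordinate, and v_unit_o for the first target movement vector) can be sketched together. The pinhole projection with focal length f is a toy stand-in for the engine's real projection transformation, and all names are illustrative.

```python
import numpy as np

def project(P_c, f=1.0):
    """Toy pinhole projection from the camera coordinate system to the screen
    coordinate system; a real engine would use its projection matrix and
    viewport transform instead."""
    x, y, z = P_c
    return np.array([f * x / z, f * y / z])

def first_target_vector(P2_c, v_unit_c, f=1.0):
    """v_unit_o = P3_o - P2_o: move the first intersection point P2_c by the
    unit movement vector on its z-plane, project both points to screen space,
    and take the coordinate offset between the two projections."""
    P2_c = np.asarray(P2_c, dtype=float)
    P3_c = P2_c + np.asarray(v_unit_c, dtype=float)  # stays on the z-plane when v_unit_c[2] == 0
    return project(P3_c, f) - project(P2_c, f)
```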
Step S35: and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system.
Step S36: and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
Therefore, according to the present application, the corresponding first target movement vector under the preset screen coordinate system is determined from the unit movement vector moved on the z-plane under the preset camera coordinate system; the vector by which the virtual object to be moved needs to move under the preset camera coordinate system, i.e. the second target movement vector, is determined according to the triangle similarity principle; the vector by which the virtual object to be moved needs to move under the preset three-dimensional world coordinate system, i.e. the third target movement vector, is then determined by using the camera transformation inverse matrix between the preset three-dimensional world coordinate system and the preset camera coordinate system; finally, the virtual object to be moved can be accurately controlled to move under the preset three-dimensional world coordinate system without repeated interactive adjustment.
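Steps S31 to S36 can be combined into one end-to-end sketch under the same assumptions used above (a toy pinhole projection with focal length f in place of the engine's projection transformation, a 4x4 homogeneous world-to-camera matrix M_c, and hypothetical names throughout):

```python
import numpy as np

def move_vector_world(P2_c, v_unit_c, v1_o, M_c, f=1.0):
    """Given the first intersection point P2_c (camera space), the unit
    movement vector v_unit_c, the screen coordinate offset value v1_o and
    the world-to-camera matrix M_c, return the third target movement vector
    in the three-dimensional world coordinate system."""
    def project(P):  # toy pinhole projection, stand-in for the engine's
        return np.array([f * P[0] / P[2], f * P[1] / P[2]])

    P2_c = np.asarray(P2_c, dtype=float)
    P3_c = P2_c + np.asarray(v_unit_c, dtype=float)          # first moved coordinate
    v_unit_o = project(P3_c) - project(P2_c)                 # first target movement vector
    scale = np.linalg.norm(v1_o) / np.linalg.norm(v_unit_o)  # triangle similarity
    v2_c = np.asarray(v_unit_c, dtype=float) * scale         # second target movement vector
    v_h = np.append(v2_c, 0.0)                               # direction: homogeneous w = 0
    return (np.linalg.inv(M_c) @ v_h)[:3]                    # third target movement vector
```

With an identity camera matrix, an intersection two units in front of the camera, a unit x movement and a one-unit screen drag, the object must move two units along the world x axis, since the unit movement only produced half a unit of screen displacement at that depth.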
Referring to fig. 6, an embodiment of the present application discloses a virtual object moving apparatus, including:
the intersection point coordinate acquisition module 11 is configured to emit a ray from an anchor point of a virtual object to be moved under a preset camera coordinate system, so as to obtain a first intersection point coordinate of the ray and the virtual object to be moved under the preset camera coordinate system;
the first vector acquisition module 12 is configured to determine a first target movement vector under a preset screen coordinate system based on the unit movement vector under the preset camera coordinate system and the first intersection point coordinate;
the second vector acquisition module 13 is configured to determine a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
the virtual object moving module 14 is configured to perform coordinate system conversion on the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and to control the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
The beneficial effects of the present application are as follows: a ray is emitted from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the ray and the virtual object to be moved under the preset camera coordinate system; a first target movement vector under a preset screen coordinate system is determined based on the unit movement vector under the preset camera coordinate system and the first intersection point coordinate; a second target movement vector of the virtual object to be moved under the preset camera coordinate system is determined by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system; and the second target movement vector is subjected to coordinate system conversion to obtain a third target movement vector under a preset three-dimensional world coordinate system, according to which the virtual object to be moved is controlled to move. In this way, with the anchor point of the virtual object to be moved as the emission origin of the ray, the first intersection point coordinate under the preset camera coordinate system is obtained and the first target movement vector under the preset screen coordinate system is determined; the second target movement vector under the preset camera coordinate system is then determined from the first target movement vector and the screen coordinate offset value, and is converted into the third target movement vector under the preset three-dimensional world coordinate system, so that the virtual object to be moved can be accurately controlled to move without repeated interactive adjustment.
Further, an embodiment of the present application also provides an electronic device. Fig. 7 is a block diagram of an electronic device 20 according to an exemplary embodiment, and the contents of the diagram should not be construed as limiting the scope of use of the present application in any way.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Specifically, the electronic device may include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the following steps:
emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system;
determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate;
determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
In some embodiments, the processor may specifically implement the following steps by executing the computer program stored in the memory:
determining an anchor point of a virtual object to be moved under a preset screen coordinate system, and receiving a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and carrying out coordinate system conversion on the anchor points under the preset screen coordinate system to obtain anchor points of the virtual object to be moved under the preset camera coordinate system.
In some embodiments, the processor may specifically implement the following steps by executing the computer program stored in the memory:
determining an anchor point of a virtual object to be moved under a preset camera coordinate system as an emission origin;
and emitting rays from the emission origin to the vertical axis positive direction of the preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
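The two steps above can be illustrated with a minimal sketch in which a sphere stands in for the virtual object to be moved; the sphere stand-in, the choice of camera z as the "vertical axis", and all names are assumptions made for illustration, not the patent's implementation:

```python
import math

def first_intersection(anchor, center, radius):
    """First hit of the ray anchor + t*(0, 0, 1), t >= 0, with a sphere
    (center, radius) standing in for the object; None when the ray misses."""
    ox, oy, oz = (a - c for a, c in zip(anchor, center))
    # With ray direction (0, 0, 1), the quadratic in t reduces to
    # t^2 + 2*oz*t + (ox^2 + oy^2 + oz^2 - r^2) = 0.
    b = 2.0 * oz
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0     # try the nearer root first
    if t < 0:
        t = (-b + math.sqrt(disc)) / 2.0
    if t < 0:
        return None
    ax, ay, az = anchor
    return (ax, ay, az + t)

# Anchor at the camera-space origin, object centered 5 units ahead.
hit = first_intersection((0.0, 0.0, 0.0), (0.0, 0.0, 5.0), 1.0)
```

Any other intersectable representation of the object (mesh, bounding box) could replace the sphere; only the ray-along-positive-z convention matters here.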
In some embodiments, the processor may specifically implement the following steps by executing the computer program stored in the memory:
moving the first intersection point coordinate by a unit movement vector on a vertical plane under the preset camera coordinate system to obtain a first moved coordinate under the preset camera coordinate system;
converting the first moved coordinate and the first intersection point coordinate respectively to obtain a second moved coordinate and a second intersection point coordinate under a preset screen coordinate system;
and determining a first target movement vector under a preset screen coordinate system based on the second moved coordinate and the second intersection point coordinate.
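A minimal sketch of the three steps above, with assumed pinhole intrinsics (FX, FY, CX, CY) and illustrative names standing in for the patent's quantities:

```python
# Assumed intrinsics: focal lengths and principal point, in pixels.
FX, FY, CX, CY = 800.0, 800.0, 400.0, 300.0

def to_screen(p_cam):
    """Convert a camera-space point to the preset screen coordinate system."""
    x, y, z = p_cam
    return (FX * x / z + CX, FY * y / z + CY)

first_hit = (0.0, 0.0, 4.0)                        # first intersection point coordinate
# Move by a unit movement vector on the vertical (z = 4) plane.
first_moved = (first_hit[0] + 1.0, first_hit[1], first_hit[2])

second_hit = to_screen(first_hit)                  # second intersection point coordinate
second_moved = to_screen(first_moved)              # second moved coordinate
first_target = (second_moved[0] - second_hit[0],
                second_moved[1] - second_hit[1])   # first target movement vector
```

The projection here is a deliberately simple stand-in for the projection transformation relation between the two coordinate systems.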
In some embodiments, the processor may specifically implement the following steps by executing the computer program stored in the memory:
and determining a projection transformation relation between a preset screen coordinate system and the preset camera coordinate system, and respectively carrying out coordinate system conversion on the first moved coordinate and the first intersection point coordinate by utilizing the projection transformation relation so as to obtain a second moved coordinate and a second intersection point coordinate under the preset screen coordinate system.
In some embodiments, the processor may specifically implement the following steps by executing the computer program stored in the memory:
and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the unit movement vector, the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system based on a triangle similarity principle.
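A minimal sketch of this triangle-similarity computation (all names are hypothetical): a camera-space movement of length `unit_len` projects to `first_target` pixels on the screen, so the camera-space movement realizing a given screen offset scales by the same camera-units-per-pixel ratio.

```python
import math

def second_target_movement(unit_len, first_target, screen_offset):
    """Camera-space movement (x, y, 0) realizing `screen_offset` pixels,
    by similar triangles between the unit move and the requested drag."""
    units_per_pixel = unit_len / math.hypot(*first_target)
    dx, dy = (o * units_per_pixel for o in screen_offset)
    return (dx, dy, 0.0)   # depth component unchanged by an on-plane drag

# A unit move shows up as 200 pixels; the user drags (100, 40) pixels.
move = second_target_movement(1.0, (200.0, 0.0), (100.0, 40.0))
```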
In some embodiments, the processor may further implement the following steps by executing the computer program stored in the memory:
determining a camera transformation inverse matrix between a preset three-dimensional world coordinate system and the preset camera coordinate system;
and converting the coordinate system of the second target movement vector by using the camera transformation inverse matrix to obtain a third target movement vector under a preset three-dimensional world coordinate system.
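A minimal sketch of this conversion, under the assumption that the view transform is rigid: applied to a *direction* vector, the camera transformation inverse matrix reduces to the transposed rotation part, since translation does not affect vectors. The example pose and all names are illustrative.

```python
def to_world_vector(rotation, move_cam):
    """rotation: 3x3 world->camera rotation (row-major tuples); returns the
    camera-space movement vector expressed in world coordinates via R^T."""
    # Inverse of a rotation matrix is its transpose: world = R^T * cam.
    return tuple(sum(rotation[r][c] * move_cam[r] for r in range(3))
                 for c in range(3))

# Example: a camera rotated 90 degrees about the y axis.
R = ((0.0, 0.0, 1.0),
     (0.0, 1.0, 0.0),
     (-1.0, 0.0, 0.0))
world_move = to_world_vector(R, (1.0, 0.0, 0.0))
```

A full 4x4 inverse view matrix would give the same result for a direction vector with homogeneous coordinate w = 0.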
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device; the communication interface 24 can create a data transmission channel between the electronic device and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting data to the outside, and its specific interface type may be selected according to specific application requirements, which is not limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 21 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 22, as a carrier for storing resources, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon include an operating system 221, a computer program 222, and data 223, and the storage may be temporary or permanent.
The operating system 221, which may be Windows, Unix, Linux, or the like, is used for managing and controlling the hardware devices on the electronic device and the computer program 222, so that the processor 21 can operate on and process the mass data 223 in the memory 22. The computer program 222 may include, in addition to the computer program capable of performing the virtual object moving method executed by the electronic device as disclosed in any of the foregoing embodiments, computer programs for performing other specific tasks. The data 223 may include, in addition to data received by the electronic device from external devices, data collected through its own input/output interface 25, and so on.
Further, the application also discloses a computer readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the virtual object movement method disclosed previously. For specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and no further description is given here.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill in the art would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM (Compact Disc Read-Only Memory), or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The virtual object moving method, device, equipment and medium provided by the present invention have been described above with specific examples to illustrate their principles and embodiments; the description of the above examples is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present invention; in view of the above, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. A virtual object moving method, comprising:
emitting rays from an anchor point of a virtual object to be moved under a preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system;
determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate;
determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and converting the coordinate system of the second target movement vector to obtain a third target movement vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target movement vector under the preset three-dimensional world coordinate system.
2. The method for moving a virtual object according to claim 1, wherein before emitting rays from an anchor point of the virtual object to be moved in a preset camera coordinate system, further comprising:
determining an anchor point of a virtual object to be moved under a preset screen coordinate system, and receiving a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
and carrying out coordinate system conversion on the anchor points under the preset screen coordinate system to obtain anchor points of the virtual object to be moved under the preset camera coordinate system.
3. The method for moving a virtual object according to claim 1, wherein the emitting a ray from an anchor point of a virtual object to be moved in a preset camera coordinate system to obtain a first intersection point coordinate of the ray and the virtual object to be moved in the preset camera coordinate system includes:
determining an anchor point of a virtual object to be moved under a preset camera coordinate system as an emission origin;
and emitting rays from the emission origin to the vertical axis positive direction of the preset camera coordinate system to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system.
4. The virtual object moving method according to claim 1, wherein the determining a first target movement vector in a preset screen coordinate system based on the unit movement vector in the preset camera coordinate system and the first intersection point coordinate comprises:
moving the first intersection point coordinate by a unit movement vector on a vertical plane under the preset camera coordinate system to obtain a first moved coordinate under the preset camera coordinate system;
converting the first moved coordinate and the first intersection point coordinate respectively to obtain a second moved coordinate and a second intersection point coordinate under a preset screen coordinate system;
and determining a first target movement vector under a preset screen coordinate system based on the second moved coordinate and the second intersection point coordinate.
5. The method according to claim 4, wherein the transforming the coordinate system of the first post-movement coordinate and the first intersection coordinate to obtain a second post-movement coordinate and a second intersection coordinate under a preset screen coordinate system includes:
and determining a projection transformation relation between a preset screen coordinate system and the preset camera coordinate system, and respectively carrying out coordinate system conversion on the first moved coordinate and the first intersection point coordinate by utilizing the projection transformation relation so as to obtain a second moved coordinate and a second intersection point coordinate under the preset screen coordinate system.
6. The virtual object moving method according to claim 1, wherein the determining a second target movement vector of the virtual object to be moved in the preset camera coordinate system using the first target movement vector and a screen coordinate offset value of the virtual object to be moved in the preset screen coordinate system, comprises:
and determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by using the unit movement vector, the first target movement vector and the screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system based on a triangle similarity principle.
7. The virtual object moving method according to any one of claims 1 to 6, wherein the performing coordinate system transformation on the second target moving vector to obtain a third target moving vector in a preset three-dimensional world coordinate system includes:
determining a camera transformation inverse matrix between a preset three-dimensional world coordinate system and the preset camera coordinate system;
and converting the coordinate system of the second target movement vector by using the camera transformation inverse matrix to obtain a third target movement vector under a preset three-dimensional world coordinate system.
8. A virtual object moving apparatus, comprising:
the intersection point coordinate acquisition module is used for transmitting rays from an anchor point of the virtual object to be moved under a preset camera coordinate system so as to obtain a first intersection point coordinate of the rays and the virtual object to be moved under the preset camera coordinate system;
a first vector acquisition module for determining a first target motion vector in a preset screen coordinate system based on the unit motion vector in the preset camera coordinate system and the first intersection point coordinate;
the second vector acquisition module is used for determining a second target movement vector of the virtual object to be moved under the preset camera coordinate system by utilizing the first target movement vector and a screen coordinate offset value of the virtual object to be moved under the preset screen coordinate system;
the virtual object moving module is used for converting the coordinate system of the second target moving vector to obtain a third target moving vector under a preset three-dimensional world coordinate system, and controlling the virtual object to be moved to move according to the third target moving vector under the preset three-dimensional world coordinate system.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the virtual object movement method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program; wherein the computer program when executed by a processor implements the steps of the virtual object movement method as claimed in any one of claims 1 to 7.
CN202311360025.8A 2023-10-19 2023-10-19 Virtual object moving method, device, equipment and medium Pending CN117611770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311360025.8A CN117611770A (en) 2023-10-19 2023-10-19 Virtual object moving method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN117611770A true CN117611770A (en) 2024-02-27

Family

ID=89952295


Country Status (1)

Country Link
CN (1) CN117611770A (en)

Similar Documents

Publication Publication Date Title
US11538229B2 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111294665B (en) Video generation method and device, electronic equipment and readable storage medium
JP7228623B2 (en) Obstacle detection method, device, equipment, storage medium, and program
JP2021101365A (en) Positioning method, positioning device, and electronic device
CN112766027A (en) Image processing method, device, equipment and storage medium
CN114564106B (en) Method and device for determining interaction indication line, electronic equipment and storage medium
CN110928509A (en) Display control method, display control device, storage medium, and communication terminal
WO2024066756A1 (en) Interaction method and apparatus, and display device
WO2024067320A1 (en) Virtual object rendering method and apparatus, and device and storage medium
WO2024041623A1 (en) Special effect map generation method and apparatus, device, and storage medium
WO2023231926A1 (en) Image processing method and apparatus, device, and storage medium
CN111949816B (en) Positioning processing method, device, electronic equipment and storage medium
CN110288523B (en) Image generation method and device
CN111833391A (en) Method and device for estimating image depth information
CN117611770A (en) Virtual object moving method, device, equipment and medium
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
CN112308767B (en) Data display method and device, storage medium and electronic equipment
CN114092645A (en) Visual building method and device of three-dimensional scene, electronic equipment and storage medium
CN114564268A (en) Equipment management method and device, electronic equipment and storage medium
CN113313809A (en) Rendering method and device
CN111429576A (en) Information display method, electronic device, and computer-readable medium
CN113093901B (en) Panoramic picture display method, device and equipment
CN115578522B (en) Image-based color densification point cloud generation method and device
CN112306344B (en) Data processing method and mobile terminal
CN113129457B (en) Texture generation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication