CN112465901B - Information processing method and device - Google Patents

Information processing method and device

Info

Publication number
CN112465901B
CN112465901B (application CN202011447733.1A)
Authority
CN
China
Prior art keywords
virtual camera
camera
screen
vector
touch point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011447733.1A
Other languages
Chinese (zh)
Other versions
CN112465901A (en)
Inventor
刘双双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority: CN202011447733.1A
Publication of CN112465901A
Application granted
Publication of CN112465901B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an information processing method and device. While the position of the virtual camera in a three-dimensional virtual space is being changed, the relative distance and relative direction between the position of the touch point displayed on the screen and the position of the virtual camera displayed on the screen are kept unchanged. For example, whatever position on the displayed virtual camera the touch point occupied before the camera's position in the three-dimensional virtual space was changed, the touch point still occupies that position afterwards. The user can therefore continue to control the virtual camera without readjusting the touch point after each position change, which makes it convenient to control the virtual camera on the top-view interface, improves the sustainability of control of the virtual camera, and reduces the complexity of the user's operation of the virtual camera, thereby improving the user experience.

Description

Information processing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information processing method and apparatus.
Background
In order to enable clients to perceive the state of a housing listing more realistically, a worker of the network platform can create a three-dimensional virtual space of the listing, whose structure and scale are the same as those of the real property. The worker can then virtually decorate the three-dimensional virtual space, so that clients can perceive a more complete state of the listing from the decorated space, which increases the possibility that a client chooses to buy or rent the listing.
Disclosure of Invention
The application discloses an information processing method and device.
In a first aspect, the present application shows an information processing method for displaying at least a three-dimensional virtual space through a screen of an electronic device, where the three-dimensional virtual space includes a specific position and a virtual camera, and a lens of the virtual camera is pointed at the specific position, the method including:
acquiring an offset between an original position of a touch point on the screen and an original position of the virtual camera before changing a position of the virtual camera in the three-dimensional virtual space;
acquiring the current position of the touch point under the condition of changing the position of the virtual camera in the three-dimensional virtual space;
acquiring a target position of the virtual camera according to the specific position, the current position and the offset, and acquiring an angle variation of an orientation of a lens of the virtual camera according to the specific position, the current position and the offset;
and updating the display state of the virtual camera on the screen according to the target position and the angle variation.
In an optional implementation manner, the acquiring an offset between an original position of a touch point on the screen and an original position of the virtual camera includes:
acquiring an original touch point screen coordinate of the touch point in a screen coordinate system, and acquiring an original touch point coordinate corresponding to the original touch point screen coordinate in a camera coordinate system;
acquiring original camera coordinates of the virtual camera in a camera coordinate system;
and calculating the difference between the original touch point coordinates and the original camera coordinates to obtain the offset.
In an optional implementation manner, the obtaining the current position of the touch point includes:
acquiring the current touch point screen coordinate of the touch point in a screen coordinate system;
and acquiring the world coordinate of the current touch point corresponding to the screen coordinate of the current touch point in a world coordinate system, and taking the world coordinate as the current position of the touch point.
In an optional implementation manner, the obtaining an angle change amount of an orientation of a lens of the virtual camera according to the specific position, the current position, and the offset includes:
acquiring a first distance between the specific position and the current position;
acquiring a first vector taking the specific position as a starting point and the target position of the virtual camera as an end point according to the current position, the specific position, the first distance and the offset;
and acquiring the angle variation of the orientation of the lens of the virtual camera according to the first vector.
In an optional implementation manner, the obtaining an angle change amount of an orientation of a lens of the virtual camera according to the first vector includes:
calculating a ratio between a value of a first dimension and a value of a second dimension in the first vector;
and calculating an arc tangent function value of the ratio as the angle variation.
In an optional implementation manner, the obtaining, according to the current position, the specific position, the first distance, and the offset, a first vector with the specific position as a starting point and a target position of the virtual camera as an ending point includes:
acquiring a unit direction vector of the first vector according to the specific position, the first distance, the current position and the offset, wherein the direction of the unit direction vector comprises a direction pointing from the specific position to a target position of the virtual camera;
acquiring the length of the first vector according to the first distance and the offset;
and acquiring the first vector according to the unit direction vector and the length.
In an optional implementation manner, the obtaining a unit direction vector of the first vector according to the specific location, the first distance, the current location, and the offset includes:
generating a second vector taking the specific position as a starting point and the current position as an end point according to the specific position and the current position;
acquiring an included angle between the first vector and the second vector according to the first distance and the offset;
and acquiring the unit direction vector according to the second vector and the included angle.
In an optional implementation manner, the obtaining the target position of the virtual camera according to the specific position, the current position, and the offset includes:
acquiring a target position of the virtual camera according to the specific position and the first vector.
In an optional implementation manner, the obtaining the target position of the virtual camera according to the specific position and the first vector includes:
and adding the numerical values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and the first vector to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and taking the target camera world coordinate as the target position of the virtual camera.
In an optional implementation manner, the updating the display state of the virtual camera on the screen according to the target position and the angle change amount includes:
moving the virtual camera on the screen according to the target position, and rotating an orientation of a lens of the virtual camera by the angle change amount.
In one optional implementation, the target location includes target camera world coordinates of the virtual camera in a world coordinate system;
the moving the virtual camera on the screen according to the target position includes:
acquiring a target camera screen coordinate corresponding to the target camera world coordinate in a screen coordinate system;
and moving the virtual camera displayed on the screen to the area where the screen coordinates of the target camera are located.
In a second aspect, the present application shows an information processing apparatus that displays at least a three-dimensional virtual space including a specific position and a virtual camera whose lens is directed to the specific position through a screen of an electronic device, the apparatus comprising:
a first acquisition module for acquiring an offset between an original position of a touch point on the screen and an original position of the virtual camera before changing a position of the virtual camera in the three-dimensional virtual space;
a second obtaining module, configured to obtain a current position of the touch point when a position of the virtual camera in the three-dimensional virtual space is changed;
a third obtaining module, configured to obtain a target position of the virtual camera according to the specific position, the current position, and the offset, and a fourth obtaining module, configured to obtain an angle variation of an orientation of a lens of the virtual camera according to the specific position, the current position, and the offset;
and the updating module is used for updating the display state of the virtual camera on the screen according to the target position and the angle variation.
In an optional implementation manner, the first obtaining module includes:
the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring the original touch point screen coordinates of the touch points in a screen coordinate system;
a third acquisition unit, configured to acquire original camera coordinates of the virtual camera in a camera coordinate system;
and the calculating unit is used for calculating the difference between the original touch point coordinates and the original camera coordinates to obtain the offset.
In an optional implementation manner, the second obtaining module includes:
the fourth acquisition unit is used for acquiring the current touch point screen coordinate of the touch point in a screen coordinate system;
and the fifth acquisition unit is used for acquiring the world coordinate of the current touch point corresponding to the current touch point screen coordinate in a world coordinate system and taking the world coordinate as the current position of the touch point.
In an optional implementation manner, the fourth obtaining module includes:
a sixth acquiring unit configured to acquire a first distance between the specific position and the current position;
a seventh obtaining unit, configured to obtain, according to the current position, the specific position, the first distance, and the offset, a first vector that takes the specific position as a starting point and a target position of the virtual camera as an end point;
an eighth acquiring unit, configured to acquire an angle variation amount of an orientation of a lens of the virtual camera according to the first vector.
In an optional implementation manner, the eighth obtaining unit includes:
a first calculating subunit, configured to calculate a ratio between a value of a first dimension and a value of a second dimension in the first vector;
and the second calculating subunit is used for calculating an arc tangent function value of the ratio and taking the arc tangent function value as the angle variation.
In an optional implementation manner, the seventh obtaining unit includes:
a first obtaining subunit, configured to obtain a unit direction vector of the first vector according to the specific position, the first distance, the current position, and the offset, where a direction of the unit direction vector includes a direction from the specific position to a target position of the virtual camera;
a second obtaining subunit, configured to obtain a length of the first vector according to the first distance and the offset;
and the third acquisition subunit is used for acquiring the first vector according to the unit direction vector and the length.
In an optional implementation manner, the first obtaining subunit is specifically configured to: generate a second vector taking the specific position as a starting point and the current position as an end point according to the specific position and the current position; acquire an included angle between the first vector and the second vector according to the first distance and the offset; and acquire the unit direction vector according to the second vector and the included angle.
In an optional implementation manner, the third obtaining module includes:
a ninth acquisition unit to acquire a target position of the virtual camera according to the specific position and the first vector.
In an optional implementation manner, the ninth obtaining unit is specifically configured to: and adding the numerical values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and the first vector to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and taking the target camera world coordinate as the target position of the virtual camera.
In an optional implementation manner, the update module includes:
a moving unit for moving the virtual camera on the screen according to the target position, and a rotating unit for rotating the orientation of the lens of the virtual camera by the angle variation.
In one optional implementation, the target location includes target camera world coordinates of the virtual camera in a world coordinate system;
the mobile unit includes:
the fourth acquisition subunit is used for acquiring the corresponding target camera screen coordinates of the target camera world coordinates in a screen coordinate system;
and the moving subunit is used for moving the virtual camera displayed on the screen to the area where the screen coordinates of the target camera are located.
In a third aspect, the present application shows an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the information processing method according to the first aspect.
In a fourth aspect, the present application shows a non-transitory computer-readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the information processing method according to the first aspect.
In a fifth aspect, the present application shows a computer program product, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the information processing method according to the first aspect.
The technical scheme provided by the application can comprise the following beneficial effects:
In the present application, before the position of the virtual camera in the three-dimensional virtual space is changed, the offset between the original position of the touch point on the screen and the original position of the virtual camera is acquired. When the position of the virtual camera in the three-dimensional virtual space is changed, the current position of the touch point is acquired. The target position of the virtual camera and the angle change amount of the orientation of its lens are then acquired according to the specific position, the current position, and the offset. Finally, the display state of the virtual camera is updated on the screen according to the target position and the angle change amount.
With the present application, while the position of the virtual camera in the three-dimensional virtual space is being changed, the relative distance and relative direction between the position of the touch point displayed on the screen of the electronic device and the position of the virtual camera displayed on the screen remain unchanged. For example, whichever part of the displayed virtual camera the touch point was on before the position of the virtual camera in the three-dimensional virtual space was changed, the touch point is still on that part of the displayed virtual camera afterwards, rather than outside it. The user can therefore keep controlling the virtual camera without readjusting the touch point after each change of the camera's position, which makes it convenient to control the virtual camera on the top-view interface, for example to change its position in the three-dimensional virtual space. This improves the sustainability of control of the virtual camera, reduces the complexity of the user's operation of the virtual camera, and thus improves the user experience.
Drawings
FIG. 1 is a schematic view of an interface of the present application.
FIG. 2 is a flow chart of steps of an information processing method of the present application.
FIG. 3 is a schematic view of an interface of the present application.
FIG. 4 is a flow chart of steps of an information processing method of the present application.
FIG. 5 is a schematic view of an interface of the present application.
Fig. 6 is a block diagram of a configuration of an information processing apparatus according to the present application.
Fig. 7 is a block diagram of an electronic device shown in the present application.
Fig. 8 is a block diagram of an electronic device shown in the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
When a user (a worker) virtually decorates a three-dimensional virtual space, it is generally necessary to arrange home decoration models in the space, for example a sofa model, a wardrobe model, a television cabinet model, a bed model, a television model, an air conditioner model, a table model, and the like.
Before arranging a home decoration model in the three-dimensional virtual space, the user may fine-tune the model, for example its style, texture, color, and the pattern of its outer surface.
In one possible case, the home decoration model may have a plurality of outer surfaces, for example six or more. The user can then display the different outer surfaces of the model on the screen of the terminal in turn, so as to fine-tune each of them.
Sometimes, the position of the home decoration model in the three-dimensional virtual space is fixed while the position of the observation point from which it is viewed is variable; that is, the position of the virtual camera that photographs the home decoration model (the observation point) can change.
In this case, the user needs to manually change the position of the observation point on the electronic device, so that different outer surfaces of the home decoration model can be displayed from different observation points on the normal interface and fine-tuned. An outer surface of the model is visible at an observation point because the virtual camera located there photographs that surface through its lens.
In order to assist the user in manually changing the position of the observation point on the normal interface, the electronic device may provide a top-view interface on the screen alongside the normal interface. The top-view interface displays the three-dimensional virtual space from a top-down viewing angle, including the home decoration model and the virtual camera. In this top-down view, the lens of the virtual camera points at the home decoration model, and the position of the virtual camera in the three-dimensional virtual space on the top-view interface corresponds to the position of the observation point in the three-dimensional virtual space on the normal interface.
At any moment, the virtual camera photographs part of the surface of the home decoration model, so that part of the surface is displayed on the normal interface. On the top-view interface, the user can intuitively see where the virtual camera is relative to the home decoration model and which part of the model's surface its lens points at. It is therefore more convenient for the user to steer the lens toward the desired surface of the model by changing the position of the virtual camera in the three-dimensional virtual space on the top-view interface.
In addition, as the user changes the position of the virtual camera in the three-dimensional virtual space through the top-view interface, the pointing direction of the camera's lens changes accordingly (the lens must keep pointing at the home decoration model), and the position of the observation point changes synchronously on the normal interface. At any moment, the surface of the home decoration model that the lens points at is the same on the normal interface and on the top-view interface.
In one approach, when the user needs to change the position of the virtual camera on the top-view interface, the user can move the mouse cursor to the area where the virtual camera is located, press and hold the left mouse button, and move the mouse so that the cursor moves on the top-view interface. The virtual camera then moves along with the cursor, in the same direction and over the same distance on the top-view interface.
Since the lens of the virtual camera must keep pointing at the home decoration model, the orientation of the lens needs to be updated in real time according to the camera's current position while its position on the top-view interface is being changed. After the lens orientation changes, however, the touch point (the position of the cursor on the top-view interface) may no longer lie in the area occupied by the virtual camera on the top-view interface; that is, the touch point ends up outside that area.
For example, referring to fig. 1, fig. 1 shows the positional relationship between the virtual camera (the camera on the left in fig. 1) and the home decoration model before the user changes the camera's position on the top-view interface; the touch point K is on the lower right of the area where the virtual camera is located. Fig. 1 also shows the positional relationship after the user changes the camera's position (the camera on the right in fig. 1); the touch point K is now outside the area where the virtual camera is located.
However, once the touch point is outside the area where the virtual camera is located, continuing to move the touch point with the mouse no longer drags the virtual camera along on the top-view interface; that is, the camera's position can no longer be changed on the top-view interface by moving the touch point with the mouse.
In this way, after the touch point ends up outside the camera's area, the position of the virtual camera no longer follows the cursor even if the user keeps moving it on the top-view interface. The user must release the left mouse button, move the cursor back onto the area where the virtual camera is located, press the left button again, and move the cursor while holding the button down in order to keep changing the camera's position in the three-dimensional virtual space.
However, this "release the left mouse button, move the cursor back onto the virtual camera, press the left button again" procedure makes it inconvenient for the user to control the virtual camera on the top-view interface, reduces the sustainability of camera control, increases the complexity of operating the camera, and thus degrades the user experience.
Therefore, in order to improve the user experience, refer to fig. 2, which shows a flowchart of the steps of an information processing method of the present application. At least a three-dimensional virtual space is displayed through a screen of an electronic device; the three-dimensional virtual space includes a specific position and a virtual camera whose lens points at the specific position. The three-dimensional virtual space in the present application may be displayed from a top-down viewing angle. The method may specifically include the following steps:
in step S101, before changing the position of the virtual camera in the three-dimensional virtual space, the amount of offset between the original position of the touch point on the screen and the original position of the virtual camera is acquired.
In this application, the step may be implemented by the following process, including:
1011. Acquire the original touch point screen coordinates of the touch point in the screen coordinate system, and acquire the original touch point coordinates corresponding to those screen coordinates in the camera coordinate system.
In this application, while the electronic device displays the three-dimensional virtual space on the screen, together with the specific position and the virtual camera in it, the user can control the electronic device to change the position of the virtual camera in the three-dimensional virtual space.
The user can change the position of the virtual camera in the three-dimensional virtual space by touching the screen or by using a mouse to move a cursor displayed on the screen. Accordingly, the touch point on the screen of the electronic device may include the cursor of the mouse, the contact point of an operating body on the screen, and the like.
In one example, the original position of the touch point on the screen of the electronic device may include coordinates of the touch point on the screen of the electronic device in a camera coordinate system, and the original position of the virtual camera may include coordinates of the virtual camera in the camera coordinate system.
The electronic device cannot directly obtain the coordinates of the touch point in the camera coordinate system. However, the screen coordinate system is recorded in the electronic device, so the device can obtain the coordinates of the touch point in the screen coordinate system and then derive the touch point's coordinates in the camera coordinate system from them.
The electronic device may create the camera coordinate system before changing the position of the virtual camera in the three-dimensional virtual space, and may determine the conversion relationship between the camera coordinate system and the screen coordinate system in advance.
Therefore, to acquire the original touch point coordinates of the touch point in the camera coordinate system, the original touch point screen coordinates of the touch point in the screen coordinate system can be acquired first; since the screen coordinate system is recorded in the electronic device, these screen coordinates can be sensed directly.
Then, the electronic device can acquire the original touch point coordinates corresponding to the original touch point screen coordinates in the camera coordinate system. This involves converting coordinates in the screen coordinate system into coordinates in the camera coordinate system; the specific conversion may follow any currently existing method, which the present application does not limit.
Referring to fig. 3, which views the specific position and the virtual camera in the three-dimensional virtual space from a top-down angle, the center point of the virtual camera is the origin of the camera coordinate system. The plane of the screen contains the x-axis and the z-axis of the camera coordinate system: the direction from the specific position toward the center point of the virtual camera is the positive z-axis direction, and the positive x-axis direction is the positive z-axis direction rotated 90° counterclockwise in the plane of the screen. The top-down viewing direction is the negative y-axis direction of the camera coordinate system, and its reverse is the positive y-axis direction.
1012. Acquire the original camera coordinates of the virtual camera in the camera coordinate system.
In this application, the camera coordinate system is recorded in the electronic device, and the device displays the virtual camera in the three-dimensional virtual space according to that coordinate system. The electronic device can therefore sense the coordinates of the virtual camera in the camera coordinate system in real time; in particular, before changing the position of the virtual camera in the three-dimensional virtual space, it can sense the original camera coordinates of the virtual camera in the camera coordinate system.
1013. Calculate the difference between the original touch point coordinates and the original camera coordinates to obtain the offset between the original position of the touch point on the screen and the original position of the virtual camera.
Since the present application views the specific position and the virtual camera in the three-dimensional virtual space from a top-down angle, the y-axis value need not be calculated; for example, it may be ignored or treated as 0.
Thus, the difference between the original touch point coordinates and the original camera coordinates can be calculated and used as the offset.
The difference between the original touch point coordinates and the original camera coordinates may be computed per coordinate axis. For example, the difference between the x-axis value of the original touch point coordinates and the x-axis value of the original camera coordinates in the camera coordinate system gives the offset on the x-axis, and the difference between their z-axis values gives the offset on the z-axis. The offset between the original position of the touch point on the screen of the electronic device and the original position of the virtual camera can then be obtained from these two values, for example by taking the x-axis offset and the z-axis offset together as the offset.
In one example, the original camera coordinates of the virtual camera in the camera coordinate system are the coordinates of the camera's center point. Since the center point of the virtual camera coincides with the origin of the camera coordinate system, the original camera coordinates have the value 0 on both the x-axis and the z-axis.
The touch point on the screen of the electronic device may be at any position on the virtual camera, so the x-axis value of the original touch point coordinates may be greater than, less than, or equal to 0, and the same holds for the z-axis value. Consequently, the difference from the original camera coordinates on each axis may likewise be greater than, less than, or equal to 0.
In the schematic diagram shown in fig. 3, before the position of the virtual camera in the three-dimensional virtual space is changed, the touch point is at the lower right of the virtual camera, i.e., in the fourth quadrant of the camera coordinate system. Both the x-axis value and the z-axis value of the original touch point coordinates are then greater than 0, so both differences from the original camera coordinates are greater than 0. The offset between the original position of the touch point on the screen and the original position of the virtual camera thus consists of two values, one for the x-axis and one for the z-axis, both greater than 0.
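As an illustration of sub-steps 1011 to 1013, the following Python sketch computes the offset. The helper screen_to_camera and all numeric values are hypothetical: the application leaves the screen-to-camera conversion to existing methods, so the mapping assumed here is for illustration only.

```python
# Minimal sketch of sub-steps 1011-1013. The conversion below is a
# hypothetical stand-in: it assumes the camera center is drawn at
# screen pixel (400, 300) and that screen x/y map to camera x/z.

def screen_to_camera(screen_xy):
    sx, sy = screen_xy
    return (sx - 400.0, sy - 300.0)  # assumed mapping, illustration only

# Step 1011: original touch point screen coords -> camera coords (x, z).
touch_cam = screen_to_camera((412.0, 318.0))

# Step 1012: original camera coords; the camera center coincides with
# the origin of the camera coordinate system, so both components are 0.
camera_cam = (0.0, 0.0)

# Step 1013: per-axis difference gives the offset; the y-axis is
# ignored in the top-down view.
offset_x = touch_cam[0] - camera_cam[0]   # 12.0 here, > 0 (lower right)
offset_z = touch_cam[1] - camera_cam[1]   # 18.0 here, > 0
```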
In step S102, in the case where the position of the virtual camera in the three-dimensional virtual space is changed, the current position of the touch point is acquired.
The user may control the electronic device to change the position of the virtual camera in the three-dimensional virtual space so that the lens of the virtual camera can point at the specific position from different directions, thereby allowing the user to view the specific position from different angles.
For example, when a three-dimensional virtual object, such as a sofa, table, bed, window, or wardrobe in the home decoration field, is placed at the specific position in the three-dimensional virtual space, the user may control the electronic device to change the position of the virtual camera so that the lens can point at different surfaces of the object, allowing the user to view those different surfaces.
In the present application, the user can likewise change the position of the virtual camera in the three-dimensional virtual space by touching the screen or by using a mouse to move a cursor displayed on the screen. When the user controls the electronic device to change the position of the virtual camera, the electronic device may acquire the current position of the touch point on its screen; the touch point may include the cursor of the mouse, the contact point of an operating body on the screen, and the like.
In the case of changing the position of the virtual camera in the three-dimensional virtual space, the current position of the touch point may include coordinates of the touch point in a world coordinate system.
Specifically, in this step, the current touch point screen coordinates of the touch point on the screen of the electronic device in the screen coordinate system may be acquired. Then, the current touch point world coordinates corresponding to those screen coordinates in the world coordinate system are acquired and taken as the current position of the touch point on the screen. This involves converting coordinates in the screen coordinate system into coordinates in the world coordinate system; the specific conversion may follow any currently existing method, which the present application does not limit.
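A minimal sketch of step S102 under the same caveat: screen_to_world is a hypothetical placeholder for whatever screen-to-world conversion the implementation uses, here assumed to be a uniform scale about an assumed screen center.

```python
# Sketch of step S102 with an assumed screen-to-world mapping:
# 1 px = 0.01 world units and screen pixel (400, 300) = world origin.

def screen_to_world(screen_xy):
    sx, sy = screen_xy
    return ((sx - 400.0) * 0.01, (sy - 300.0) * 0.01)  # world (x, z); y ignored

current_touch_screen = (520.0, 80.0)             # current cursor position, px
touch_w = screen_to_world(current_touch_screen)  # current position M of the touch point
```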
In step S103, a target position of the virtual camera is acquired from the specific position, the current position, and the offset amount, and an angle change amount of an orientation of a lens of the virtual camera is acquired from the specific position, the current position, and the offset amount.
For this step, refer to the examples shown later; it is not described in detail here.
In step S104, the display state of the virtual camera is updated on the screen according to the target position and the amount of angle change.
In the present application, the display state of the virtual camera includes the position of the virtual camera in the top view interface, the orientation of the lens of the virtual camera, and the like.
In this step, the virtual camera may be moved on the screen according to the target position, and the orientation of the lens of the virtual camera is rotated by the angle change amount.
In one embodiment of the present application, the target position of the virtual camera includes target camera world coordinates of the virtual camera in a world coordinate system.
In this way, when the virtual camera is moved on the screen according to the target position, the target camera screen coordinates corresponding to the target camera world coordinates in the screen coordinate system can be acquired. This involves converting coordinates in the world coordinate system into coordinates in the screen coordinate system; the specific conversion may follow any currently existing method, which the present application does not limit. The virtual camera displayed on the screen can then be moved to the area where the target camera screen coordinates are located.
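A minimal sketch of this movement, where world_to_screen is the hypothetical inverse of the mapping assumed in the earlier sketch; the application does not prescribe the world-to-screen conversion either.

```python
# Sketch of moving the displayed camera (step S104). world_to_screen is
# the inverse of the assumed mapping: 1 px = 0.01 world units, screen
# pixel (400, 300) = world origin.

def world_to_screen(world_xz):
    wx, wz = world_xz
    return (wx / 0.01 + 400.0, wz / 0.01 + 300.0)

target_w = (2.98, 1.05)                  # target camera world coords (assumed)
target_screen = world_to_screen(target_w)
# The virtual camera displayed on the screen would then be moved to the
# area at target_screen, and its lens rotated by the angle change amount.
```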
In the present application, while the user changes the position of the virtual camera in the three-dimensional virtual space, the camera's corresponding position in the screen coordinate system changes along with the position of the touch point on the screen.
In the present application, before the position of the virtual camera in the three-dimensional virtual space is changed, the offset between the original position of the touch point on the screen and the original position of the virtual camera is acquired. When the position of the virtual camera in the three-dimensional virtual space is changed, the current position of the touch point is acquired. The target position of the virtual camera and the angle change amount of the orientation of its lens are then acquired according to the specific position, the current position, and the offset. Finally, the display state of the virtual camera is updated on the screen according to the target position and the angle change amount.
With the present application, while the position of the virtual camera in the three-dimensional virtual space is being changed, the relative distance and relative direction between the position of the touch point displayed on the screen of the electronic device and the position of the virtual camera displayed on the screen remain unchanged. For example, whichever part of the displayed virtual camera the touch point was on before the position of the virtual camera in the three-dimensional virtual space was changed, the touch point is still on that part of the displayed virtual camera afterwards, rather than outside it. The user can therefore keep controlling the virtual camera without readjusting the touch point after each change of the camera's position, which makes it convenient to control the virtual camera on the top-view interface, for example to change its position in the three-dimensional virtual space. This improves the sustainability of control of the virtual camera, reduces the complexity of the user's operation of the virtual camera, and thus improves the user experience.
In one embodiment of the present application, referring to fig. 4, the process of "acquiring an angle variation amount of an orientation of a lens of a virtual camera according to a specific position, a current position, and an offset" in step S103 includes:
in step S201, a first distance between the specific location and the current location is acquired.
A world coordinate system is recorded in the electronic device, and the specific position and the virtual camera are viewed from a top-down angle. In the world coordinate system, the plane of the screen contains the x-axis and the z-axis: the x-axis may point horizontally to the right on the screen plane, and the z-axis vertically downward. The top-down viewing direction is the negative y-axis direction, and its reverse is the positive y-axis direction.
Since the specific position and the virtual camera in the three-dimensional virtual space are viewed from the top-down angle, the y-axis value need not be calculated in the world coordinate system; for example, it may be ignored or treated as 0.
In this way, in this step, the first distance between the specific position and the current position may be obtained as follows: square the difference between the x-axis value of the specific coordinates (the specific position in the world coordinate system) and the x-axis value of the current touch point world coordinates to obtain a first square; square the difference between their z-axis values to obtain a second square; sum the two squares; and take the square root of the sum.
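A minimal sketch of this computation, with assumed world coordinates for the specific position O and the current touch point M:

```python
import math

O = (2.0, 3.0)   # specific position, world (x, z); y ignored (assumed values)
M = (3.2, 0.8)   # current touch point world coordinates (assumed values)

# Step S201: first distance d1 = sqrt((Ox - Mx)^2 + (Oz - Mz)^2).
d1 = math.hypot(O[0] - M[0], O[1] - M[1])
```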
In step S202, a first vector is obtained with the specific position as a starting point and the target position of the virtual camera as an end point according to the current position, the specific position, the first distance and the offset.
This step can be realized by the following process, including:
2021. Obtain a unit direction vector of the first vector according to the specific position, the first distance, the current position, and the offset; the direction of the unit direction vector points from the specific position to the target position of the virtual camera.
This step can be realized by the following process, including:
11) Generate a second vector taking the specific position as a starting point and the current position as an end point, according to the specific position and the current position.
In the present application, the positions involved when the position of the virtual camera in the three-dimensional virtual space is changed can be seen in the schematic diagram shown in fig. 5. Fig. 5 includes two virtual cameras: the position of the left one is the position of the virtual camera before its position in the three-dimensional virtual space is changed, and the position of the right one is its position afterwards. M is the touch point while the position of the virtual camera is being changed.
In this application, a first difference between the x-axis value of the current touch point world coordinates and the x-axis value of the specific position, and a second difference between their z-axis values, may be calculated. The second vector can then be generated from these two differences: its vector representation is (first difference, second difference).
12) Acquire the included angle between the first vector and the second vector according to the first distance and the offset.
In the present application, the offset between the original position of the touch point on the screen of the electronic device and the original position of the virtual camera includes an offset on the x-axis of the camera coordinate system and an offset on the z-axis of the camera coordinate system.
The unit length of each axis of the camera coordinate system is the same as that of the world coordinate system, so the offset Offsetx between the original touch point coordinates and the original camera coordinates on the x-axis of the camera coordinate system, and the offset Offsetz on its z-axis, can be applied directly in the world coordinate system.
Therefore, the offset corresponding to the included angle θ between the first vector and the second vector can be obtained from the offsets on the x-axis and z-axis of the camera coordinate system; in the example of fig. 5 it is the offset on the x-axis, i.e., Offsetx. The ratio between this offset and the first distance is calculated, and the arcsine function value of the ratio is then obtained, for example by inputting the ratio into an arcsine function; this arcsine function value is the included angle θ between the first vector and the second vector. Sub-steps 11) to 13) are illustrated together in the sketch after sub-step 13) below.
13) Acquire the unit direction vector according to the second vector and the included angle.
In the present application, once the vector representation of the second vector is known, the direction of the second vector can be determined from it, and the unit direction vector of the first vector can then be obtained from the direction of the second vector and the included angle θ.
In an embodiment of the present application, when the offset on the x-axis of the camera coordinate system between the original position of the touch point on the screen and the original position of the virtual camera is positive, the included angle θ between the first vector and the second vector is positive, and the direction of the first vector is the direction of the second vector rotated clockwise by θ in the plane of the x-axis and z-axis of the world coordinate system.
In another embodiment of the present application, when that offset on the x-axis is negative, the included angle θ between the first vector and the second vector is negative, and the direction of the first vector is the direction of the second vector rotated counterclockwise by the magnitude of θ in the plane of the x-axis and z-axis of the world coordinate system.
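The following Python sketch illustrates sub-steps 11) to 13) under assumed values. All numbers are hypothetical, and the rotation sign convention (positive θ rotates clockwise in an x-right, z-down plane) is an assumption rather than something the application fixes; a single signed rotation by θ covers both the clockwise and the counterclockwise case described above.

```python
import math

O = (2.0, 3.0)     # specific position, world (x, z) (assumed)
M = (3.2, 0.8)     # current touch point world coordinates (assumed)
offset_x = 0.12    # Offsetx from step 1013 (assumed)

# Sub-step 11): second vector from the specific position to the touch point.
v2 = (M[0] - O[0], M[1] - O[1])

# Sub-step 12): included angle; arcsine of (x-axis offset / first distance).
d1 = math.hypot(*v2)
theta = math.asin(offset_x / d1)   # sign of theta follows sign of offset_x

# Sub-step 13): rotate the second vector by theta in the x-z plane and
# normalize; theta > 0 gives the clockwise case, theta < 0 the
# counterclockwise case (assumed sign convention).
c, s = math.cos(theta), math.sin(theta)
rotated = (v2[0] * c - v2[1] * s, v2[0] * s + v2[1] * c)
norm = math.hypot(*rotated)
unit_dir = (rotated[0] / norm, rotated[1] / norm)
```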
2022. Acquire the length of the first vector according to the first distance and the offset.
The projection point of the current touch point world coordinates onto the direction of the first vector may be determined; the point P shown in fig. 5 is this projection point. The length of the first vector is then the distance between the specific position and the projection point minus the distance between the target camera world coordinates of the virtual camera (the coordinates of the camera's center point in the world coordinate system) and the projection point.
In fig. 5, the current touch point world coordinate M, the specific position O, and the projection point P form a right triangle whose right angle is at the projection point P. The angle between edge OM and edge OP is θ, and the angle between edge MP and edge OP is 90°. The length of the second vector is the length of edge OM.
Therefore, to calculate the distance between the specific position and the projection point, the cosine of the included angle θ can be computed and multiplied by the length of the second vector (the length of edge OM); the product is the distance between the specific position and the projection point, i.e., the length of edge OP.
In addition, the length of side CP equals the offset, on the z-axis of the camera coordinate system, between the original touch point coordinate of the touch point and the original camera coordinate of the virtual camera, and this offset can be applied directly in the world coordinate system. Therefore, the difference between the length of edge OP and this z-axis offset gives the distance between the target camera world coordinate of the virtual camera (the coordinate of the center point C of the virtual camera in the world coordinate system) and the projection point P.
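Putting the two calculations above together, a hedged Python sketch of the length computation (the identifiers are assumed names, not taken from the patent):

```python
import math

def first_vector_length(om_length, theta, offset_z):
    """Length of the first vector OC, per the right triangle OMP in fig. 5.

    om_length: length of edge OM, i.e. of the second vector.
    theta: included angle between edge OM and edge OP, in radians.
    offset_z: z-axis offset in the camera coordinate system between the
              original touch point and the original camera position (edge CP).
    """
    op_length = om_length * math.cos(theta)  # distance from O to projection P
    return op_length - offset_z              # OP minus CP yields OC
```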
2023. And acquiring a first vector according to the unit direction vector and the length.
In the world coordinate system, because the specific position in the three-dimensional virtual space and the virtual camera are viewed from a top-down perspective, the value on the y-axis may be ignored or treated as 0.
Thus, the vector representation of the first vector may comprise (m, n), where m is the corresponding value on the x-axis in the world coordinate system and n is the corresponding value on the z-axis in the world coordinate system.
The ratio between n and m can be obtained from the unit direction vector of the first vector (the ratio of its components gives the direction), and the sum of the square of m and the square of n equals the square of the length of the first vector. The values of m and n can be solved from these two constraints, yielding the vector representation of the first vector, i.e., the first vector OC.
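Equivalently, and more directly than solving the two constraints, the first vector can be obtained by scaling the unit direction vector by the length, since the unit direction vector already fixes the ratio between m and n; a minimal sketch:

```python
def first_vector(unit_direction, length):
    """First vector OC as (m, n): its world x and z components."""
    ux, uz = unit_direction
    return (ux * length, uz * length)
```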
In step S203, an angle change amount of the orientation of the lens of the virtual camera is acquired from the first vector.
In one embodiment of the present application, a ratio between a value of a first dimension and a value of a second dimension in a first vector may be calculated. The arctan function value of this ratio is then calculated and used as the angle change amount.
In the present application, before the position of the virtual camera in the three-dimensional virtual space is changed, the line connecting the center point of the virtual camera and the specific position is parallel to the z-axis of the world coordinate system, that is, the line is vertical in the top-down view. After the position of the virtual camera is changed, the virtual camera is in the state shown toward the right side of fig. 5. Because the lens of the virtual camera must remain aimed at the specific position, the orientation of the lens must change as the camera moves to the target camera world coordinate in the world coordinate system; the required angle change amount of the lens orientation is denoted β in fig. 5.
To calculate the angle change amount β, a line through the specific position and parallel to the z-axis of the world coordinate system may be determined, together with the projection point of the target camera world coordinate on that line, as shown in fig. 5. With the target camera world coordinate and the specific position known in the world coordinate system, a first difference may be calculated between the value of the target camera world coordinate corresponding to the x-axis and the value of the specific coordinate corresponding to the x-axis, giving the length of line segment HM, and a second difference may be calculated between the value of the target camera world coordinate corresponding to the z-axis and the value of the specific coordinate corresponding to the z-axis, giving the length of line segment HO. The ratio between the first difference and the second difference is then calculated, and the arctangent of this ratio, for example obtained by inputting the ratio into an arctangent function, is the angle change amount β.
In the example shown in fig. 5, the first difference and the second difference are both positive, so their ratio is positive and the calculated angle change amount β is positive; the lens of the virtual camera may therefore be rotated counterclockwise by β degrees, taking its orientation before the position change as the reference.
In another embodiment, one of the first difference and the second difference is positive and the other is negative, so their ratio is negative and the calculated angle change amount β is negative; the lens may therefore be rotated clockwise by |β| degrees, again taking its orientation before the position change as the reference.
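The following sketch covers both this geometric derivation and the embodiment based on the first vector's dimensions, since the first difference and second difference are exactly the x and z components of the first vector OC. It assumes the second difference is nonzero, and all identifiers are assumed names:

```python
import math

def lens_angle_change(target_camera_xz, specific_xz):
    """Angle change beta (radians) of the lens orientation.

    A positive result means a counterclockwise rotation and a negative one
    a clockwise rotation, matching the two cases described above.
    """
    first_difference = target_camera_xz[0] - specific_xz[0]   # segment HM
    second_difference = target_camera_xz[1] - specific_xz[1]  # segment HO
    return math.atan(first_difference / second_difference)
```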
In another embodiment of the present application, based on the embodiment shown in fig. 4, in step S103, when the target position of the virtual camera is obtained according to the specific position, the current position, and the offset amount, the target position of the virtual camera may be obtained according to the specific position and the first vector.
For example, the values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and in the first vector are added to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and the target camera world coordinate is used as the target position of the virtual camera.
Specifically, because the specific position in the three-dimensional virtual space and the virtual camera are viewed from a top-down perspective, the value on the y-axis in the world coordinate system may be ignored or treated as 0.
In this way, a first sum may be calculated between the value of the specific coordinate of the specific position on the x-axis of the world coordinate system and the value corresponding to the x-axis in the first vector, serving as the value of the target camera world coordinate of the virtual camera on the x-axis of the world coordinate system; likewise, a second sum may be calculated between the value of the specific coordinate on the z-axis of the world coordinate system and the value corresponding to the z-axis in the first vector, serving as the value of the target camera world coordinate on the z-axis of the world coordinate system. This yields the target camera world coordinate of the virtual camera (the coordinate of the center point C of the virtual camera in the world coordinate system), for example (first sum, second sum).
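A one-line sketch of this component-wise sum under the top-down assumption (y treated as 0):

```python
def target_camera_world_xz(specific_xz, first_vector_xz):
    """Target camera world coordinate as (x, z)."""
    return (specific_xz[0] + first_vector_xz[0],   # first sum (x-axis)
            specific_xz[1] + first_vector_xz[1])   # second sum (z-axis)
```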
It is noted that, for simplicity of explanation, the method embodiments are described as a series of acts or a combination of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and the acts involved are not necessarily required by the present application.
Referring to fig. 6, a block diagram of a configuration of an information processing apparatus of the present application is shown. The apparatus displays, through a screen of an electronic device, at least a three-dimensional virtual space including a specific position and a virtual camera whose lens points to the specific position, and the apparatus includes:
a first obtaining module 11, configured to obtain an offset between an original position of a touch point on the screen and an original position of the virtual camera before changing a position of the virtual camera in the three-dimensional virtual space;
a second obtaining module 12, configured to obtain a current position of the touch point when a position of the virtual camera in the three-dimensional virtual space is changed;
a third obtaining module 13, configured to obtain a target position of the virtual camera according to the specific position, the current position, and the offset, and a fourth obtaining module 14, configured to obtain an angle change amount of an orientation of a lens of the virtual camera according to the specific position, the current position, and the offset;
and the updating module 15 is configured to update the display state of the virtual camera on the screen according to the target position and the angle variation.
In an optional implementation manner, the first obtaining module includes:
the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring the original touch point screen coordinates of the touch points in a screen coordinate system;
a third acquisition unit, configured to acquire original camera coordinates of the virtual camera in a camera coordinate system;
and the calculating unit is used for calculating the difference between the original touch point coordinates and the original camera coordinates to obtain the offset.
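As a hedged sketch of the calculating unit, assuming the offset is a component-wise difference in the camera coordinate system (the identifiers are assumptions, not names from the patent):

```python
def touch_point_offset(original_touch_point, original_camera):
    """Component-wise difference between the original touch point coordinates
    and the original camera coordinates in the camera coordinate system."""
    return tuple(t - c for t, c in zip(original_touch_point, original_camera))
```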
In an optional implementation manner, the second obtaining module includes:
the fourth acquisition unit is used for acquiring the current touch point screen coordinate of the touch point in a screen coordinate system;
and the fifth acquisition unit is used for acquiring the world coordinate of the current touch point corresponding to the current touch point screen coordinate in a world coordinate system and taking the world coordinate as the current position of the touch point.
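The patent does not specify how a screen coordinate is mapped to a world coordinate. One common approach for a top-down view, shown here purely as an assumed sketch, is an orthographic mapping from the screen rectangle onto the visible world rectangle:

```python
def screen_to_world_xz(screen_xy, screen_size, world_min_xz, world_max_xz):
    """Map a screen coordinate to world (x, z), assuming an orthographic
    top-down view showing the rectangle [world_min_xz, world_max_xz]."""
    sx, sy = screen_xy
    w, h = screen_size
    x = world_min_xz[0] + (sx / w) * (world_max_xz[0] - world_min_xz[0])
    # Screen y grows downward while world z is assumed to grow upward.
    z = world_max_xz[1] - (sy / h) * (world_max_xz[1] - world_min_xz[1])
    return (x, z)
```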
In an optional implementation manner, the fourth obtaining module includes:
a sixth acquiring unit configured to acquire a first distance between the specific position and the current position;
a seventh obtaining unit, configured to obtain, according to the current position, the specific position, the first distance, and the offset, a first vector that takes the specific position as a starting point and a target position of the virtual camera as an end point;
an eighth acquiring unit, configured to acquire an angle variation amount of an orientation of a lens of the virtual camera according to the first vector.
In an optional implementation manner, the eighth obtaining unit includes:
a first calculating subunit, configured to calculate a ratio between a value of a first dimension and a value of a second dimension in the first vector;
and the second calculating subunit is used for calculating an arc tangent function value of the ratio and taking the arc tangent function value as the angle variation.
In an optional implementation manner, the seventh obtaining unit includes:
a first obtaining subunit, configured to obtain a unit direction vector of the first vector according to the specific position, the first distance, the current position, and the offset, where a direction of the unit direction vector includes a direction from the specific position to a target position of the virtual camera;
a second obtaining subunit, configured to obtain a length of the first vector according to the first distance and the offset;
and the third acquisition subunit is used for acquiring the first vector according to the unit direction vector and the length.
In an optional implementation manner, the first obtaining subunit is specifically configured to: generate a second vector taking the specific position as a starting point and the current position as an end point according to the specific position and the current position; acquire an included angle between the first vector and the second vector according to the first distance and the offset; and acquire the unit direction vector according to the second vector and the included angle.
In an optional implementation manner, the third obtaining module includes:
a ninth acquisition unit to acquire a target position of the virtual camera according to the specific position and the first vector.
In an optional implementation manner, the ninth obtaining unit is specifically configured to: add the values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and in the first vector to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and take the target camera world coordinate as the target position of the virtual camera.
In an optional implementation manner, the update module includes:
a moving unit for moving the virtual camera on the screen according to the target position, and a rotating unit for rotating the orientation of the lens of the virtual camera by the angle variation.
In one optional implementation, the target location includes target camera world coordinates of the virtual camera in a world coordinate system;
the mobile unit includes:
the fourth acquisition subunit is used for acquiring the corresponding target camera screen coordinates of the target camera world coordinates in a screen coordinate system;
and the moving subunit is used for moving the virtual camera displayed on the screen to the area where the screen coordinates of the target camera are located.
In the present application, before changing the position of the virtual camera in the three-dimensional virtual space, the offset between the original position of the touch point on the screen and the original position of the virtual camera is acquired. In the case of changing the position of the virtual camera in the three-dimensional virtual space, the current position of the touch point is acquired. The method includes the steps of obtaining a target position of the virtual camera according to the specific position, the current position and the offset, and obtaining an angle variation of an orientation of a lens of the virtual camera according to the specific position, the current position and the offset. And updating the display state of the virtual camera on the screen according to the target position and the angle variation.
Through the present application, in the process of changing the position of the virtual camera in the three-dimensional virtual space, the relative distance and relative direction between the position of the touch point displayed on the screen of the electronic device and the position of the virtual camera displayed on the screen remain unchanged. For example, whichever part of the displayed virtual camera the touch point was located on before the position change, the touch point is still located on that same part of the displayed virtual camera after the position change, rather than falling outside the displayed virtual camera. Therefore, the user can continue to control the virtual camera without readjusting the touch point after the position change, which makes it convenient to control the virtual camera on the top-down interface, for example to change its position in the three-dimensional virtual space. This improves the continuity of control over the virtual camera, reduces the operational complexity for the user, and improves the user experience.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 7 is a block diagram of an electronic device 800 shown in the present application. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast operation information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a block diagram of an electronic device 1900 shown in the present application. For example, the electronic device 1900 may be provided as a server.
Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The information processing method and apparatus provided by the present application are introduced in detail above. Specific examples are used herein to explain the principle and implementation of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (24)

1. An information processing method for displaying at least a three-dimensional virtual space including a specific position and a virtual camera with a lens of the virtual camera directed to the specific position through a screen of an electronic device, the method comprising:
acquiring an offset between an original position of a touch point on the screen and an original position of the virtual camera before changing a position of the virtual camera in the three-dimensional virtual space;
acquiring the current position of the touch point under the condition of changing the position of the virtual camera in the three-dimensional virtual space;
acquiring a target position of the virtual camera according to the specific position, the current position and the offset, and acquiring an angle variation of an orientation of a lens of the virtual camera according to the specific position, the current position and the offset;
and updating the display state of the virtual camera on the screen according to the target position and the angle variation.
2. The method of claim 1, wherein the obtaining an offset between an original position of a touch point on the screen and an original position of the virtual camera comprises:
acquiring an original touch point screen coordinate of the touch point in a screen coordinate system, and acquiring an original touch point coordinate corresponding to the original touch point screen coordinate in a camera coordinate system;
acquiring original camera coordinates of the virtual camera in a camera coordinate system;
and calculating the difference between the original touch point coordinates and the original camera coordinates to obtain the offset.
3. The method of claim 1, wherein the obtaining the current position of the touch point comprises:
acquiring the current touch point screen coordinate of the touch point in a screen coordinate system;
and acquiring the world coordinate of the current touch point corresponding to the screen coordinate of the current touch point in a world coordinate system, and taking the world coordinate as the current position of the touch point.
4. The method of claim 1, wherein the obtaining an angular change amount of an orientation of a lens of the virtual camera according to the specific position, the current position, and the offset comprises:
acquiring a first distance between the specific position and the current position;
acquiring a first vector taking the specific position as a starting point and the target position of the virtual camera as an end point according to the current position, the specific position, the first distance and the offset;
and acquiring the angle variation of the orientation of the lens of the virtual camera according to the first vector.
5. The method of claim 4, wherein the obtaining the angle change amount of the orientation of the lens of the virtual camera according to the first vector comprises:
calculating a ratio between a value of a first dimension and a value of a second dimension in the first vector;
and calculating an arc tangent function value of the ratio as the angle variation.
6. The method of claim 4, wherein obtaining a first vector with the specific position as a starting point and the target position of the virtual camera as an ending point according to the current position, the specific position, the first distance, and the offset comprises:
acquiring a unit direction vector of the first vector according to the specific position, the first distance, the current position and the offset, wherein the direction of the unit direction vector comprises a direction pointing from the specific position to a target position of the virtual camera;
acquiring the length of the first vector according to the first distance and the offset;
and acquiring the first vector according to the unit direction vector and the length.
7. The method of claim 6, wherein obtaining a unit direction vector of the first vector according to the specific location, the first distance, the current location, and the offset comprises:
generating a second vector taking the specific position as a starting point and the current position as an end point according to the specific position and the current position;
acquiring an included angle between the first vector and the second vector according to the first distance and the offset;
and acquiring the unit direction vector according to the second vector and the included angle.
8. The method of claim 4, wherein the obtaining the target position of the virtual camera according to the specific position, the current position, and the offset comprises:
acquiring a target position of the virtual camera according to the specific position and the first vector.
9. The method of claim 8, wherein the acquiring the target position of the virtual camera according to the specific position and the first vector comprises:
and adding the numerical values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and the first vector to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and taking the target camera world coordinate as the target position of the virtual camera.
10. The method according to claim 1, wherein the updating the display state of the virtual camera on the screen according to the target position and the angle change amount comprises:
moving the virtual camera on the screen according to the target position, and rotating an orientation of a lens of the virtual camera by the angle change amount.
11. The method of claim 10, wherein the target location comprises target camera world coordinates of the virtual camera in a world coordinate system;
the moving the virtual camera on the screen according to the target position includes:
acquiring a target camera screen coordinate corresponding to the target camera world coordinate in a screen coordinate system;
and moving the virtual camera displayed on the screen to the area where the screen coordinates of the target camera are located.
12. An information processing apparatus that displays at least a three-dimensional virtual space including a specific position and a virtual camera whose lens is directed to the specific position through a screen of an electronic device, the apparatus comprising:
a first acquisition module for acquiring an offset between an original position of a touch point on the screen and an original position of the virtual camera before changing a position of the virtual camera in the three-dimensional virtual space;
a second obtaining module, configured to obtain a current position of the touch point when a position of the virtual camera in the three-dimensional virtual space is changed;
a third obtaining module, configured to obtain a target position of the virtual camera according to the specific position, the current position, and the offset, and a fourth obtaining module, configured to obtain an angle variation of an orientation of a lens of the virtual camera according to the specific position, the current position, and the offset;
and the updating module is used for updating the display state of the virtual camera on the screen according to the target position and the angle variation.
13. The apparatus of claim 12, wherein the first obtaining module comprises:
a first acquisition unit, configured to acquire original touch point screen coordinates of the touch point in a screen coordinate system, and a second acquisition unit, configured to acquire original touch point coordinates corresponding to the original touch point screen coordinates in a camera coordinate system;
a third acquisition unit, configured to acquire original camera coordinates of the virtual camera in a camera coordinate system;
and the calculating unit is used for calculating the difference between the original touch point coordinates and the original camera coordinates to obtain the offset.
14. The apparatus of claim 12, wherein the second obtaining module comprises:
the fourth acquisition unit is used for acquiring the current touch point screen coordinate of the touch point in a screen coordinate system;
and the fifth acquisition unit is used for acquiring the world coordinate of the current touch point corresponding to the current touch point screen coordinate in a world coordinate system and taking the world coordinate as the current position of the touch point.
15. The apparatus of claim 12, wherein the fourth obtaining module comprises:
a sixth acquiring unit configured to acquire a first distance between the specific position and the current position;
a seventh obtaining unit, configured to obtain, according to the current position, the specific position, the first distance, and the offset, a first vector that takes the specific position as a starting point and a target position of the virtual camera as an end point;
an eighth acquiring unit, configured to acquire an angle variation amount of an orientation of a lens of the virtual camera according to the first vector.
16. The apparatus of claim 15, wherein the eighth obtaining unit comprises:
a first calculating subunit, configured to calculate a ratio between a value of a first dimension and a value of a second dimension in the first vector;
and the second calculating subunit is used for calculating an arc tangent function value of the ratio and taking the arc tangent function value as the angle variation.
17. The apparatus of claim 15, wherein the seventh obtaining unit comprises:
a first obtaining subunit, configured to obtain a unit direction vector of the first vector according to the specific position, the first distance, the current position, and the offset, where a direction of the unit direction vector includes a direction from the specific position to a target position of the virtual camera;
a second obtaining subunit, configured to obtain a length of the first vector according to the first distance and the offset;
and the third acquisition subunit is used for acquiring the first vector according to the unit direction vector and the length.
18. The apparatus of claim 17, wherein the first obtaining subunit is specifically configured to: generate a second vector taking the specific position as a starting point and the current position as an end point according to the specific position and the current position; acquire an included angle between the first vector and the second vector according to the first distance and the offset; and acquire the unit direction vector according to the second vector and the included angle.
19. The apparatus of claim 15, wherein the third obtaining module comprises:
a ninth acquisition unit to acquire a target position of the virtual camera according to the specific position and the first vector.
20. The apparatus according to claim 19, wherein the ninth obtaining unit is specifically configured to: add the values corresponding to the same coordinate dimension in the specific coordinate of the specific position in the world coordinate system and in the first vector to obtain the target camera world coordinate of the virtual camera in the world coordinate system, and take the target camera world coordinate as the target position of the virtual camera.
21. The apparatus of claim 12, wherein the update module comprises:
a moving unit for moving the virtual camera on the screen according to the target position, and a rotating unit for rotating the orientation of the lens of the virtual camera by the angle variation.
22. The apparatus of claim 21, wherein the target location comprises target camera world coordinates of the virtual camera in a world coordinate system;
the mobile unit includes:
the fourth acquisition subunit is used for acquiring the corresponding target camera screen coordinates of the target camera world coordinates in a screen coordinate system;
and the moving subunit is used for moving the virtual camera displayed on the screen to the area where the screen coordinates of the target camera are located.
23. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the information processing method of any one of claims 1 to 11.
24. A non-transitory computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the information processing method of any one of claims 1 to 11.
CN202011447733.1A 2020-12-11 2020-12-11 Information processing method and device Active CN112465901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011447733.1A CN112465901B (en) 2020-12-11 2020-12-11 Information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011447733.1A CN112465901B (en) 2020-12-11 2020-12-11 Information processing method and device

Publications (2)

Publication Number Publication Date
CN112465901A CN112465901A (en) 2021-03-09
CN112465901B true CN112465901B (en) 2022-03-08

Family

ID=74801883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011447733.1A Active CN112465901B (en) 2020-12-11 2020-12-11 Information processing method and device

Country Status (1)

Country Link
CN (1) CN112465901B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10146331B2 (en) * 2014-11-28 2018-12-04 Ricoh Company, Ltd. Information processing system for transforming coordinates of a position designated by a pointer in a virtual image to world coordinates, information processing apparatus, and method of transforming coordinates
CN106201207B (en) * 2016-07-13 2019-12-03 上海乐相科技有限公司 A kind of virtual reality exchange method and device
US10567649B2 (en) * 2017-07-31 2020-02-18 Facebook, Inc. Parallax viewer system for 3D content
CN110908508B (en) * 2019-11-04 2021-12-03 广东虚拟现实科技有限公司 Control method of virtual picture, terminal device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699249A (en) * 2015-03-27 2015-06-10 联想(北京)有限公司 Information processing method and electronic equipment
CN109829981A (en) * 2019-02-16 2019-05-31 深圳市未来感知科技有限公司 Three-dimensional scenic rendering method, device, equipment and storage medium
CN111135556A (en) * 2019-12-31 2020-05-12 网易(杭州)网络有限公司 Virtual camera control method and device, electronic equipment and storage medium
CN111968246A (en) * 2020-07-07 2020-11-20 北京城市网邻信息技术有限公司 Scene switching method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays; Jens Grubert et al.; IEEE Transactions on Visualization and Computer Graphics; 2017-12-28; Vol. 24, No. 9; pp. 2649-2662 *
Spatial positioning method for surveillance targets in video images; Tang Liyu et al.; Journal of Fuzhou University; 2014-02-28; Vol. 42, No. 1; pp. 55-61 *

Also Published As

Publication number Publication date
CN112465901A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
KR102194094B1 (en) Synthesis method, apparatus, program and recording medium of virtual and real objects
EP3540571B1 (en) Method and device for editing virtual scene, and non-transitory computer-readable storage medium
EP3173970A1 (en) Image processing method and apparatus
EP2927787B1 (en) Method and device for displaying picture
CN108038726B (en) Article display method and device
CN106791893A (en) Net cast method and device
CN106775525A (en) Control the method and device of projecting apparatus
EP3641295B1 (en) Shooting interface switching method and apparatus, and device and storage medium thereof
CN114170302A (en) Camera external parameter calibration method and device, electronic equipment and storage medium
CN110782532B (en) Image generation method, image generation device, electronic device, and storage medium
CN110751707B (en) Animation display method, animation display device, electronic equipment and storage medium
CN110597443B (en) Calendar display method, device and medium
CN112465901B (en) Information processing method and device
CN111373730B (en) Panoramic shooting method and terminal
CN109407942B (en) Model processing method and device, control client and storage medium
CN111428654B (en) Iris recognition method, iris recognition device and storage medium
CN109754452B (en) Image rendering processing method and device, electronic equipment and storage medium
CN114296587A (en) Cursor control method and device, electronic equipment and storage medium
US9619016B2 (en) Method and device for displaying wallpaper image on screen
CN112363652B (en) Information processing method and device
CN107564038B (en) Offset parameter determination method and device and offset control method and device
CN106598217B (en) Display method, display device and electronic equipment
CN112232899A (en) Data processing method and device
CN112860827B (en) Inter-device interaction control method, inter-device interaction control device and storage medium
CN112596840A (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant