CN113238656B - Three-dimensional image display method and device, electronic equipment and storage medium - Google Patents

Three-dimensional image display method and device, electronic equipment and storage medium

Info

Publication number
CN113238656B
Authority
CN
China
Prior art keywords
user
current user
virtual camera
position coordinates
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110571138.7A
Other languages
Chinese (zh)
Other versions
CN113238656A (en)
Inventor
王骥超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110571138.7A
Publication of CN113238656A
Application granted
Publication of CN113238656B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a three-dimensional image display method and device, an electronic device and a storage medium. The three-dimensional image display method includes: capturing a user viewing angle in real time through a front-end optical capturing device of the electronic device, the user viewing angle being the angle at which a current user views a three-dimensional object displayed on a screen of the electronic device; controlling a virtual camera to move according to the user viewing angle to obtain a moved virtual camera, the angle at which the moved virtual camera photographs the three-dimensional object being the same as the user viewing angle; and displaying a target three-dimensional image obtained by photographing the three-dimensional object with the moved virtual camera. With the method and device, the cost of displaying three-dimensional images can be reduced, and viewing three-dimensional images becomes flexible and simple for the user.

Description

Three-dimensional image display method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of three-dimensional rendering, and in particular relates to a three-dimensional image display method, a three-dimensional image display device, electronic equipment and a storage medium.
Background
The human brain perceives the front-to-back depth of a three-dimensional object because the two eyes see the scene from slightly different angles and with different occlusion relationships.
In the related art, when a user needs to observe a three-dimensional object rendered by three-dimensional rendering technology through an electronic device, the user is often required to connect the electronic device to complicated and expensive head-mounted equipment, and the user's viewing angle is tracked by attaching an additional tracking device to the user's head.
Disclosure of Invention
The disclosure provides a three-dimensional image display method, a three-dimensional image display device, an electronic device and a storage medium, so as to at least solve the problem of high three-dimensional object display cost in the related art. The technical scheme of the present disclosure is as follows:
According to a first aspect of embodiments of the present disclosure, there is provided a method for displaying a three-dimensional image, the method including:
Capturing a user viewing angle in real time through a front-end optical capturing device of the electronic device; the user viewing angle is the angle at which a current user views a three-dimensional object displayed on a screen of the electronic device;
Controlling the virtual camera to move according to the user viewing angle to obtain a moved virtual camera; the angle at which the moved virtual camera photographs the three-dimensional object is the same as the user viewing angle;
and displaying a target three-dimensional image obtained by photographing the three-dimensional object with the moved virtual camera.
In one possible implementation manner, the capturing, in real time, the user viewing angle through the front-end optical capturing device of the electronic device includes:
Tracking, in real time, viewing pose information of the current user through the front-end optical capturing device;
and determining, based on the viewing pose information, a user viewing angle at which the current user observes the three-dimensional object.
In one possible implementation, the viewing pose information includes user position coordinates of the current user relative to the front-end optical capture device, and the determining, based on the viewing pose information, a user viewing perspective of the current user viewing the three-dimensional object includes:
acquiring screen position coordinates of the screen relative to the front-end optical capturing device;
generating an observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the viewing angle vector is used to characterize a user viewing angle at which the current user views the three-dimensional object.
In one possible implementation manner, the tracking, by the front-end optical capturing device, of viewing pose information of the current user in real time includes:
Acquiring a user image of the current user in real time through the front-end optical capturing device;
Determining screen space coordinates of the current user based on the position information of the current user in the user image;
And acquiring the distance between the current user and the screen, and determining the user position coordinate of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinate.
In one possible implementation, the determining the user position coordinates of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinates includes:
Obtaining a perspective matrix of the front-end optical capturing device;
According to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device;
And multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
In one possible implementation manner, the controlling the virtual camera to move according to the user viewing angle, to obtain a moved virtual camera, includes:
Acquiring an original shooting angle vector of the virtual camera; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is positioned at the original position coordinate;
Determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector;
And controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
In one possible implementation, the determining the target position coordinates of the virtual camera based on the difference between the original shooting angle vector and the observation angle vector includes:
Determining a camera transformation matrix of the virtual camera according to the difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used for representing movement information of the virtual camera;
and multiplying the original position coordinate by the camera transformation matrix to obtain the target position coordinate.
In one possible implementation manner, the tracking, by the front-end optical capturing device, of viewing pose information of the current user in real time includes:
Acquiring user depth information of the current user in real time through the front-end optical capturing device; the user depth information comprises three-dimensional position coordinates of the current user;
user position coordinates of the current user relative to the front-end optical capturing device are determined based on a difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
According to a second aspect of embodiments of the present disclosure, there is provided a display device of a three-dimensional image, the device including:
A capturing unit configured to perform capturing a user viewing angle in real time through a front-end optical capturing device of the electronic device; the user viewing angle is the angle at which a current user views a three-dimensional object displayed on a screen of the electronic device;
a control unit configured to perform control of movement of the virtual camera according to the user viewing angle, resulting in a moved virtual camera; the angle at which the moved virtual camera photographs the three-dimensional object is the same as the user viewing angle;
And a display unit configured to perform display of a target three-dimensional image obtained by photographing the three-dimensional object by the post-movement virtual camera.
In one possible implementation, the capturing unit is specifically configured to perform real-time tracking of the viewing pose information of the current user by the front-end optical capturing device; and determining, based on the viewing pose information, a user viewing angle at which the current user observes the three-dimensional object.
In one possible implementation, the viewing pose information includes user position coordinates of the current user relative to the front-facing optical capturing device, the capturing unit being specifically configured to perform acquiring screen position coordinates of the screen relative to the front-facing optical capturing device; generating an observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the viewing angle vector is used to characterize a user viewing angle at which the current user views the three-dimensional object.
In one possible implementation, the capturing unit is specifically configured to perform acquiring, by the front-end optical capturing device, the user image of the current user in real time; determining screen space coordinates of the current user based on the position information of the current user in the user image; and acquiring the distance between the current user and the screen, and determining the user position coordinate of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinate.
In one possible implementation, the capturing unit is specifically configured to perform acquiring a perspective matrix of the front-end optical capturing device; according to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device; and multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
In one possible implementation manner, the control unit is specifically configured to perform obtaining an original shooting angle vector of the virtual camera; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is positioned at the original position coordinate; determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector; and controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
In one possible implementation, the control unit is specifically configured to perform determining a camera transformation matrix of the virtual camera according to a difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used for representing movement information of the virtual camera; and multiplying the original position coordinate by the camera transformation matrix to obtain the target position coordinate.
In one possible implementation, the capturing unit is specifically configured to obtain, in real time, user depth information of the current user through the front-end optical capturing device; the user depth information comprises three-dimensional position coordinates of the current user; and user position coordinates of the current user relative to the front-end optical capturing device are determined based on a difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the method of displaying a three-dimensional image according to the first aspect or any one of the possible implementations of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of displaying a three-dimensional image according to the first aspect or any one of the possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the method of displaying a three-dimensional image as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiments of the disclosure brings at least the following beneficial effects: the screen of the electronic device is used to display a picture obtained by photographing a three-dimensional object with a virtual camera, and the front-end optical capturing device of the electronic device is called to capture in real time the user viewing angle at which the current user observes the three-dimensional object; the virtual camera is controlled to move according to the user viewing angle, so that the screen of the electronic device displays a target three-dimensional image obtained by photographing the three-dimensional object with the virtual camera from a target camera angle that is the same as the user viewing angle. In this way, the electronic device can interact with the current user without collecting the user's viewing pose through head-mounted equipment, and can adjust the display angle of the three-dimensional object in real time according to the current user's viewing angle, so that the perspective of the virtual object seen on the phone screen with the naked eye is close to that of a real-world object, producing a naked-eye three-dimensional effect and reducing the cost of displaying three-dimensional objects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is an application environment diagram illustrating a display method of a three-dimensional image according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method of displaying a three-dimensional image according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating another three-dimensional image display method according to an exemplary embodiment.
Fig. 4 is a block diagram of a display device of a three-dimensional image according to an exemplary embodiment.
Fig. 5 is an internal structural diagram of an electronic device, which is shown according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure.
The three-dimensional image display method provided by the disclosure can be applied to an application environment as shown in fig. 1. The screen of the electronic device 110 is used to display a picture obtained by photographing a three-dimensional object with a virtual camera. The electronic device captures, in real time through its front-end optical capturing device, the user viewing angle at which the current user observes the three-dimensional object; the electronic device controls the virtual camera to move according to the user viewing angle and displays a target three-dimensional image; the target three-dimensional image comprises a picture obtained by photographing the three-dimensional object with the virtual camera from a target camera angle that matches or is the same as the user viewing angle. In practice, the electronic device 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
Fig. 2 is a flowchart illustrating a three-dimensional image display method according to an exemplary embodiment, and as shown in fig. 2, the three-dimensional image display method is applied to an electronic device, and a screen of the electronic device is used for displaying a picture obtained by photographing a three-dimensional object with a virtual camera, and the method includes:
in step S210, a user viewing angle at which a current user views a three-dimensional object displayed on a screen of the electronic device is captured in real time by a front-end optical capturing device of the electronic device.
The front-facing optical capturing device may refer to an optical capturing device for capturing optical information on a front side of the electronic device. In practical applications, the optical capturing device may be a normal camera (color camera), an infrared camera, a depth camera (3D camera).
The user viewing angle may refer to an angle of view at which a current user views a three-dimensional object in a screen of the electronic device.
Wherein a three-dimensional object may refer to an object in a virtual environment. The object may be a virtual character, a virtual animal, a cartoon character, or the like, such as characters, animals, plants, oil barrels, walls, and stones displayed in a virtual environment.
The virtual environment may refer to a virtual environment that is displayed on a screen of the electronic device when the target application program runs on the electronic device. The virtual environment may be a simulation environment for the real world, a semi-simulation and semi-imaginary environment, or a pure imaginary environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment, but are not limited thereto.
Wherein the virtual camera may refer to a simulated camera used to photograph the three-dimensional object in the target application. In practical application, when running on the electronic device, the target application program displays on the screen of the electronic device a picture obtained by photographing the three-dimensional object with the virtual camera. Specifically, the electronic device may render the three-dimensional object based on a positional relationship between the virtual camera and the three-dimensional object, and generate a target three-dimensional image including the three-dimensional object.
In a specific implementation, in the process of displaying a picture obtained by photographing a three-dimensional object with a virtual camera, the electronic device can call its front-end optical capturing device to capture in real time the angle at which the current user views the three-dimensional object on the screen. Specifically, the front-end optical capturing device of the electronic device may capture, in real time, viewing pose information of the current user, such as at least one of body movement, eye movement, head pose, and head position of the current user. Then, the electronic device determines, based on the viewing pose information, the viewing angle at which the current user observes the three-dimensional object on the screen of the electronic device.
In step S220, the virtual camera is controlled to move according to the viewing angle of the user, so as to obtain a moved virtual camera; the visual angle of the three-dimensional object shot by the virtual camera after moving is the same as the visual angle observed by the user.
In a specific implementation, after capturing a user observation view angle of a current user in real time, the electronic device moves the virtual camera based on the user observation view angle, so that a target camera view angle of a three-dimensional object shot by the virtual camera is the same as the user observation view angle.
In step S230, a target three-dimensional image obtained by photographing a three-dimensional object by the virtual camera after movement is displayed.
The target three-dimensional image comprises a picture obtained by shooting a three-dimensional object by a virtual camera through a target camera view angle which is the same as a user observation view angle.
In the specific implementation, after the electronic device controls the virtual camera to move, the three-dimensional object is rendered based on the position relationship between the moved virtual camera and the three-dimensional object in the virtual environment, and a target three-dimensional image comprising the three-dimensional object is generated.
The display method of the three-dimensional image is applied to the electronic device, whose screen is used to display a picture obtained by photographing the three-dimensional object with a virtual camera; the front-end optical capturing device of the electronic device is called to capture in real time the user viewing angle at which the current user observes the three-dimensional object; the virtual camera is controlled to move according to the user viewing angle, so that the screen of the electronic device displays a target three-dimensional image obtained by photographing the three-dimensional object with the virtual camera from a target camera angle that matches or is the same as the user viewing angle. In this way, without collecting the user's viewing pose through head-mounted equipment, the electronic device can interact with the current user and adjust the display angle of the three-dimensional object in real time according to the current user's viewing angle, so that the perspective of the virtual object seen on the phone screen with the naked eye is close to that of a real-world object, producing a naked-eye three-dimensional effect and reducing the cost of displaying the three-dimensional object.
In an exemplary embodiment, capturing, in real-time, a user viewing perspective through a front-end optical capture device of an electronic device, comprises: tracking viewing pose information of a current user in real time through a front-end optical capturing device; based on the viewing pose information, a user viewing perspective at which the current user views the three-dimensional object is determined.
Wherein the viewing pose information includes at least one of current user motion, eye motion, head pose, head position.
In a specific implementation, in a process of capturing, in real time, a user viewing angle of a current user viewing a three-dimensional object through a front-end optical capturing device of an electronic device, the electronic device tracks, in real time, viewing gesture information of the current user through the front-end optical capturing device, for example: at least one of current user motion, eye motion, head pose, head position. Then, the electronic device determines a user viewing angle at which the current user views the three-dimensional object based on the viewing pose information. Taking the viewing gesture information as an example of eyeball movement, the electronic device can determine the relative position of the current user and the display screen of the electronic device based on the eyeball movement of the current user tracked in real time, and the screen content at which the current user is looking, so as to determine the viewing angle of the current user for viewing the three-dimensional object.
According to the technical scheme, the front-end optical capturing device of the electronic device is utilized to track the viewing gesture information of the current user in real time, and the viewing gesture information of the current user is utilized to capture the viewing angle of the current user for viewing the three-dimensional object; therefore, the user observation visual angle of the three-dimensional object observed by the current user is accurately tracked by taking the observation gesture of the current user as the tracking basis.
In an exemplary embodiment, the viewing pose information includes user position coordinates of the current user relative to the front-facing optical capture device, and determining a user viewing perspective of the current user to view the three-dimensional object based on the viewing pose information includes: acquiring screen position coordinates of a screen relative to a front-end optical capturing device; based on the difference between the user position coordinates and the screen position coordinates, an observation angle vector of the current user is generated.
The user position coordinate may refer to a position coordinate of a certain key point in the current user relative to the front-end optical capturing device.
Wherein the observation angle vector may refer to a vector between the current user and the screen of the electronic device; the observation angle vector is used to characterize the user viewing angle at which the current user views the three-dimensional object.
In a specific implementation, if the viewing pose information includes the user position coordinates of the current user relative to the front-end optical capturing device, the electronic device may, in the process of determining the user viewing angle based on the viewing pose information, acquire the screen position coordinates of the screen relative to the front-end optical capturing device, and generate, based on the difference between the user position coordinates and the screen position coordinates, an observation angle vector V that characterizes the user viewing angle at which the current user observes the three-dimensional object.
In practical applications, the observation angle vector V may be expressed as V = (UserPos - PhonePos), where UserPos is the user position coordinates and PhonePos is the screen position coordinates.
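For ease of understanding, this step can be written as the following illustrative sketch. Python with NumPy is used only for readability; the helper name, the example coordinate values and the assumption that both coordinates are expressed in the coordinate system of the front-end optical capturing device are illustrative and not part of the disclosure.

import numpy as np

def observation_angle_vector(user_pos: np.ndarray, phone_pos: np.ndarray) -> np.ndarray:
    # V = UserPos - PhonePos: difference between the user position coordinates and
    # the screen position coordinates, both expressed in the same reference frame.
    return user_pos - phone_pos

# Hypothetical example values (metres): the user is about 40 cm in front of the screen,
# slightly to the right of and above the screen centre.
UserPos = np.array([0.05, 0.10, 0.40])
PhonePos = np.array([0.0, 0.0, 0.0])
V = observation_angle_vector(UserPos, PhonePos)   # -> array([0.05, 0.10, 0.40])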
According to the technical scheme of this embodiment, if the viewing pose information includes the user position coordinates of the current user relative to the front-end optical capturing device, then in the process of determining the user viewing angle based on the viewing pose information, the electronic device obtains the screen position coordinates of the screen relative to the front-end optical capturing device and generates the observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the electronic device subsequently controls the virtual camera to move based on the observation angle vector. In this way, the observation angle vector effectively characterizes the user viewing angle at which the current user observes the three-dimensional object, and the amount of data the electronic device must process to control the movement of the virtual camera based on the observation angle vector is reduced.
In an exemplary embodiment, if the front-end optical capturing device is a front-end color camera or a front-end infrared camera, tracking, in real time, viewing pose information of a current user through the front-end optical capturing device includes: acquiring a user image of a current user in real time through a front-end optical capturing device; determining screen space coordinates of the current user based on the position information of the current user in the user image; and acquiring the distance between the current user and the screen, and determining the user position coordinate of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinate.
Wherein, obtaining a perspective matrix of the front-end optical capturing device; according to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device; and multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
In a specific implementation, if the front-end optical capturing device is a front-end color camera or a front-end infrared camera, the electronic device can shoot the current user in real time through the front-end color camera or the front-end infrared camera in the process of tracking the viewing posture information of the current user in real time through the front-end optical capturing device, so that the electronic device can acquire the user image of the current user in real time. After the electronic device acquires the user image of the current user, the electronic device determines screen space coordinates ScreenPos of the current user based on the position information of the preset key points of the current user in the user image; then, the electronic device obtains the distance UserDist between the current user and the screen from the device data, and determines the user position coordinates UserPos of the current user relative to the front-end optical capturing device according to the distance UserDist and the screen space coordinates ScreenPos. The preset key points may be head key points, face key points, and the like.
In the process of determining the user position coordinate UserPos of the current user relative to the front-end optical capturing device according to the distance UserDist and the screen space coordinate ScreenPos, the electronic device can acquire the perspective matrix VPmat of the front-end optical capturing device from the device data; and performing inverse matrix operation on the perspective matrix VPmat according to the distance UserDist to obtain an inverse perspective matrix of the front-end optical capturing device. In practical applications, the inverse perspective matrix may be expressed as inv (VPmat, userDist); wherein inv () is the inverse of the matrix function of the computation matrix.
Finally, the electronic device multiplies the screen space coordinate ScreenPos by the inverse perspective matrix to obtain the user position coordinate UserPos. In practical applications, the user position coordinates UserPos may be expressed as UserPos =inv (VPmat, userDist) ScreenPos.
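The back-projection described above can be sketched as follows. The disclosure only gives the formula UserPos = inv(VPmat, UserDist) * ScreenPos, so the sketch below assumes one possible realization: ScreenPos is taken as a point in normalized device coordinates, the perspective matrix is inverted with a standard matrix inverse, and the resulting viewing ray is scaled to the measured distance UserDist. The function and parameter names are illustrative.

import numpy as np

def user_position_from_screen(screen_pos_ndc, vp_mat, user_dist):
    # screen_pos_ndc: (x, y) of the tracked key point in normalized device coordinates [-1, 1].
    # vp_mat:         4x4 perspective matrix VPmat of the front-end optical capturing device.
    # user_dist:      measured distance UserDist between the current user and the screen.
    ndc = np.array([screen_pos_ndc[0], screen_pos_ndc[1], -1.0, 1.0])
    inv_vp = np.linalg.inv(vp_mat)              # the "inverse perspective matrix"
    ray = inv_vp @ ndc
    ray = ray[:3] / ray[3]                      # de-homogenize: a point on the viewing ray
    direction = ray / np.linalg.norm(ray)
    return direction * user_dist                # UserPos at the measured distance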
According to the technical scheme of this embodiment, a user image of the current user is acquired through a front-end color camera or a front-end infrared camera; the screen space coordinates of the current user are determined based on the position information of the current user in the user image; the distance between the current user and the screen and the perspective matrix of the front-end optical capturing device are acquired; an inverse matrix operation is performed on the perspective matrix according to the distance to obtain an inverse perspective matrix of the front-end optical capturing device; finally, the screen space coordinates are multiplied by the inverse perspective matrix to obtain the user position coordinates. In this way, the user position coordinates of the current user relative to the front-end optical capturing device can be calculated accurately and quickly from the two-dimensional position information of the current user in the user image, which facilitates subsequent tracking of the current user's viewing position.
In an exemplary embodiment, according to a user viewing angle, controlling movement of the virtual camera to obtain a moved virtual camera, including: acquiring an original shooting angle vector of a virtual camera; determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector; and controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
The original shooting angle vector is used for representing the visual angle of shooting the three-dimensional object when the virtual camera is at the original position coordinates.
Wherein, according to the difference between the original shooting angle vector and the observation angle vector, a camera transformation matrix of the virtual camera is determined; the camera transformation matrix is used for representing at least one of a target translation mode, a target rotation mode and a target scaling mode aiming at the virtual camera; and multiplying the original position coordinates by the camera transformation matrix to obtain target position coordinates.
In a specific implementation, in a process that the electronic device controls the virtual camera to move according to a viewing angle observed by a user and displays a target three-dimensional image, the electronic device can acquire an original shooting angle vector used for representing the viewing angle of shooting the three-dimensional object when the virtual camera is at an original position coordinate. Then, the electronic device may determine a target translation mode, a target rotation mode, and a target scaling mode for the virtual camera based on a difference between the original photographing angle vector and the observation angle vector. For example, a translation direction and/or a translation distance to a virtual camera in a virtual environment; a rotation direction and/or a rotation angle of the virtual camera in the virtual environment; scaling directions and/or scaling factors for virtual cameras in a virtual environment. And finally, the electronic equipment moves the virtual camera based on the target translation mode, the target rotation mode and the target scaling mode, so that the moved virtual camera can shoot a three-dimensional object at the target camera view angle which is matched with or the same as the user observation view angle, and a target three-dimensional image is obtained.
In the process of determining the target position coordinates of the virtual camera based on the difference between the original shooting angle vector and the observation angle vector, the electronic device can determine the camera transformation matrix corresponding to the virtual camera according to the difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used to represent at least one of a target translation mode, a target rotation mode and a target scaling mode for the virtual camera; finally, the electronic device transforms the camera pose of the virtual camera based on the camera transformation matrix, i.e., the electronic device left-multiplies the original position coordinates of the virtual camera by the camera transformation matrix to obtain the target position coordinates.
In practical application, the electronic device may normalize the observation angle vector V and perform an atan2 calculation to obtain the three angles of the observation angle vector V relative to the x, y and z axes of a rectangular coordinate system, recording these three angles as a three-dimensional vector AngleVec. Wherein normalize() is a normalization function, and atan2() is a function that returns an azimuth angle, as in the C language.
The electronic device can then use the vector AngleVec to represent the angle and direction by which the virtual camera needs to be transformed, and construct from it a four-dimensional affine transformation matrix, i.e., the camera transformation matrix MatCam. For convenience of calculation, the transformation of the three-dimensional vector is decomposed into translation, rotation and scaling. The translation is calculated as the original coordinates plus the modulus of the vector, expressed in translation-matrix form:
Trans={1,0,0,|AngleVec.x|,
0,1,0,|AngleVec.y|,
0,0,1,-1*|AngleVec.z|,
0,0,0,1};
Wherein |AngleVec.x|, |AngleVec.y| and |AngleVec.z| are the magnitudes of the vector's components in the x, y and z directions, respectively.
The rotation is calculated as the projection of the original coordinates onto the rectangular coordinate axes after rotation, i.e., from the cosine and sine of the angles between the vector and the coordinate axes. The rotation can be decomposed into rotations about the x, y and z axes, each calculated separately and expressed in rotation-matrix form:
wherein, X axis rotation is:
RotX={1,0,0,0,
0,cos(AngleVec.x),-sin(AngleVec.x),0,
0,sin(AngleVec.x),cos(AngleVec.x),0,
0,0,0,1};
Where anglevec.x is the angle between the transformation vector and the X-axis.
Wherein, the Y-axis rotation is:
RotY={cos(AngleVec.y),0,sin(AngleVec.y),0,
0,1,0,0,
-sin(AngleVec.y),0,cos(AngleVec.y),0,
0,0,0,1}
where anglevec.y is the angle between the transformation vector and the Y-axis.
Wherein, the Z-axis rotation is:
RotZ={cos(AngleVec.z),-sin(AngleVec.z),0,0,
sin(AngleVec.z),cos(AngleVec.z),0,0,
0,0,1,0,
0,0,0,1}
where anglevec.z is the angle between the transformation vector and the Z-axis.
Since the present calculation does not involve scaling transformation, the components in the translation transformation and rotation transformation are combined, and the final affine transformation matrix (camera transformation matrix) can be obtained according to the rule of translation followed by rotation, which is expressed as:
MatCam = RotZ * RotY * RotX * Trans.
the electronic device then calculates the coordinates to which the virtual camera is to be moved and renders the three-dimensional object.
NewPos = MatCam * PosOrig: the original coordinates PosOrig of the virtual camera are left-multiplied by the camera transformation matrix MatCam to obtain the transformed camera coordinates, i.e., the target position coordinates NewPos.
The electronic device moves the virtual camera to the target position coordinates, renders the three-dimensional object from that position, and displays it on the screen of the electronic device to obtain the target three-dimensional image.
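The calculation from the observation angle vector V to the new camera position can be collected into one sketch that follows the formulas above (AngleVec from the normalized vector, Trans, RotX, RotY, RotZ, MatCam = RotZ*RotY*RotX*Trans, and NewPos = MatCam*PosOrig). The NumPy packaging, the reading of the atan2 step as the angle between V and each axis, and the function names are illustrative assumptions, not a definitive implementation of the disclosure.

import numpy as np

def camera_transform(angle_vec):
    # Build the 4x4 affine camera transformation matrix MatCam from AngleVec.
    ax, ay, az = angle_vec
    trans = np.array([[1.0, 0.0, 0.0,  abs(ax)],
                      [0.0, 1.0, 0.0,  abs(ay)],
                      [0.0, 0.0, 1.0, -abs(az)],
                      [0.0, 0.0, 0.0,  1.0]])
    rot_x = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, np.cos(ax), -np.sin(ax), 0.0],
                      [0.0, np.sin(ax),  np.cos(ax), 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    rot_y = np.array([[ np.cos(ay), 0.0, np.sin(ay), 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [-np.sin(ay), 0.0, np.cos(ay), 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    rot_z = np.array([[np.cos(az), -np.sin(az), 0.0, 0.0],
                      [np.sin(az),  np.cos(az), 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    # No scaling is involved: translation first, then rotation about x, y and z.
    return rot_z @ rot_y @ rot_x @ trans

def move_virtual_camera(v, pos_orig):
    # Target position NewPos of the virtual camera for the observation angle vector V.
    v_n = v / np.linalg.norm(v)                                 # normalize(V)
    # Angle between V and each coordinate axis, obtained with atan2 (one possible
    # reading of the "three angles relative to the x, y and z axes" step).
    angle_vec = np.array([np.arctan2(np.linalg.norm(np.delete(v_n, i)), v_n[i])
                          for i in range(3)])
    mat_cam = camera_transform(angle_vec)
    pos_h = np.append(np.asarray(pos_orig, dtype=float), 1.0)   # homogeneous PosOrig
    new_pos = mat_cam @ pos_h                                   # NewPos = MatCam * PosOrig
    return new_pos[:3]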
According to the technical scheme of this embodiment, the virtual camera is controlled to move according to the user viewing angle and a target three-dimensional image is displayed: the original shooting angle vector of the virtual camera is obtained; the camera transformation matrix of the virtual camera is determined based on the difference between the original shooting angle vector and the observation angle vector, the camera transformation matrix representing at least one of a target translation mode, a target rotation mode and a target scaling mode for the virtual camera; the original position coordinates are multiplied by the camera transformation matrix to obtain the target position coordinates of the virtual camera; and the virtual camera is controlled to move to the target position coordinates to photograph the three-dimensional object, obtaining the target three-dimensional image. In this way, based on the observation angle vector acquired in real time, the camera pose of the virtual camera is quickly transformed, so that the moved virtual camera photographs the three-dimensional object from a target camera angle that matches or is the same as the user viewing angle, and a target three-dimensional image with a realistic perspective is obtained.
In an exemplary embodiment, if the front-end optical capturing device is a front-end depth camera, tracking, in real time, viewing pose information of the current user through the front-end optical capturing device includes: acquiring user depth information of the current user in real time through the front-end depth camera; and determining user position coordinates of the current user relative to the front-end optical capturing device based on the difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
The depth camera may include a structured light camera, a TOF (time of flight) camera, a binocular stereo vision camera, among others.
Wherein the user depth information includes three-dimensional position coordinates of the current user.
In a specific implementation, if the front-end optical capturing device is a front-end depth camera, the electronic device may acquire, in real time, user depth information including the three-dimensional position coordinates of the current user through the front-end depth camera, for example a structured light camera, in the process of tracking the viewing pose information of the current user in real time. The electronic device may then determine the three-dimensional position coordinates of the front-end optical capturing device based on a predetermined camera calibration result. Finally, the electronic device determines the user position coordinates of the current user relative to the front-end optical capturing device based on the difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
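In the depth-camera case the computation reduces to a coordinate difference, as sketched below. It is assumed, for illustration only, that the depth camera reports the tracked key point directly in a 3D device coordinate system and that the coordinates of the capture device itself come from a prior calibration.

import numpy as np

def user_position_from_depth(user_xyz, capture_device_xyz):
    # Difference between the user's 3D coordinates reported by the depth camera and
    # the calibrated 3D coordinates of the front-end optical capturing device.
    return np.asarray(user_xyz, dtype=float) - np.asarray(capture_device_xyz, dtype=float)

# Hypothetical example: the user sits 45 cm in front of the (calibrated) capture device.
UserPos = user_position_from_depth([0.02, 0.08, 0.45], [0.0, 0.0, 0.0])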
According to the technical scheme, if the front-end optical capturing device is a front-end depth camera, the three-dimensional position coordinates of the current user are obtained in real time through the front-end depth camera, and the user position coordinates of the current user relative to the front-end optical capturing device are rapidly and accurately determined based on the difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
Fig. 3 is a flowchart illustrating another three-dimensional image display method according to an exemplary embodiment, which is used in the electronic device 110 of fig. 1, as shown in fig. 3, and includes the following steps.
In step S302, a user image of the current user is acquired in real time through a front-end optical capturing device of the electronic device.
In step S304, screen space coordinates of the current user are determined based on the position information of the current user in the user image.
In step S306, a distance between the current user and the screen is acquired, and a perspective matrix of the front-end optical capturing device is acquired.
In step S308, according to the distance, an inverse matrix operation is performed on the perspective matrix, so as to obtain an inverse perspective matrix of the front-end optical capturing device.
In step S310, the screen space coordinates are multiplied by the inverse perspective matrix to obtain the user position coordinates of the current user with respect to the front-end optical capturing device.
In step S312, screen position coordinates of the screen with respect to the front-end optical capturing device are acquired.
In step S314, an observation angle vector of the current user is generated based on the difference between the user position coordinates and the screen position coordinates.
In step S316, an original shooting angle vector of the virtual camera is obtained; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is located at the original position coordinate.
In step S318, a camera transformation matrix of the virtual camera is determined according to a difference between the original photographing angle vector and the observation angle vector.
In step S320, the camera transformation matrix is multiplied by the original position coordinate to obtain a target position coordinate.
In step S322, the virtual camera is controlled to move to the target position coordinate, so as to obtain a moved virtual camera.
In step S324, a target three-dimensional image obtained by photographing the three-dimensional object by the post-movement virtual camera is displayed.
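Taken together, steps S302 to S324 can be summarized in the following orchestration sketch. It reuses the helper functions sketched earlier in this description (user_position_from_screen, observation_angle_vector and move_virtual_camera); the capture, distance-measurement and rendering callbacks are placeholders for whatever the host application provides and are not part of the disclosure.

import numpy as np

def display_frame(detect_screen_pos, measure_user_dist, render,
                  vp_mat, phone_pos, cam_pos_orig):
    # detect_screen_pos(): returns (x, y) of the user's key point in normalized device coords.
    # measure_user_dist(): returns the distance between the current user and the screen.
    # render(camera_pos):  renders the three-dimensional object from the given camera position.
    screen_pos = detect_screen_pos()                                        # steps S302-S304
    user_dist = measure_user_dist()                                         # step S306
    user_pos = user_position_from_screen(screen_pos, vp_mat, user_dist)     # steps S308-S310
    v = observation_angle_vector(user_pos, np.asarray(phone_pos))           # steps S312-S314
    new_cam_pos = move_virtual_camera(v, cam_pos_orig)                      # steps S316-S322
    render(new_cam_pos)                                                     # step S324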
It should be noted that, for the specific limitations of the above steps, reference may be made to the specific limitations of the three-dimensional image display method described above, which are not repeated here.
It should be understood that, although the steps in the flowcharts of fig. 2 and fig. 3 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 3 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
Fig. 4 is a block diagram of a display apparatus for three-dimensional images according to an exemplary embodiment, which is applied to an electronic device, and a screen of the electronic device is used to display a screen obtained by photographing the three-dimensional object with a virtual camera. Referring to fig. 4, the apparatus includes:
a capturing unit 410 configured to perform capturing a user viewing angle in real time through a front-end optical capturing device of the electronic device; the user viewing angle is the angle at which a current user views a three-dimensional object displayed on a screen of the electronic device;
a control unit 420 configured to perform control of the movement of the virtual camera according to the user viewing angle, resulting in a moved virtual camera; the angle at which the moved virtual camera photographs the three-dimensional object is the same as the user viewing angle;
and a display unit 430 configured to perform display of a target three-dimensional image obtained by photographing the three-dimensional object by the post-movement virtual camera.
In one embodiment, the capturing unit 410 is specifically configured to perform real-time tracking of the viewing pose information of the current user by the front-end optical capturing device; and determining, based on the viewing pose information, a user viewing angle at which the current user observes the three-dimensional object.
In one embodiment, the viewing pose information comprises user position coordinates of the current user relative to the front optical capturing device, the capturing unit 410 being specifically configured to perform acquiring screen position coordinates of the screen relative to the front optical capturing device; generating an observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the viewing angle vector is used to characterize a user viewing angle at which the current user views the three-dimensional object.
In one embodiment, the capturing unit 410 is specifically configured to obtain, in real time, a user image of the current user through the front-end optical capturing device; determining screen space coordinates of the current user based on the position information of the current user in the user image; and acquiring the distance between the current user and the screen, and determining the user position coordinate of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinate.
In one embodiment, the capturing unit 410 is specifically configured to perform acquiring a perspective matrix of the front-end optical capturing device; according to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device; and multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
In one embodiment, the control unit 420 is specifically configured to perform obtaining an original shooting angle vector of the virtual camera; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is positioned at the original position coordinate; determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector; and controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
In one embodiment, the control unit 420 is specifically configured to perform determining a camera transformation matrix of the virtual camera according to a difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used for representing movement information of the virtual camera; and multiplying the original position coordinate by the camera transformation matrix to obtain the target position coordinate.
In one embodiment, the capturing unit 410 is specifically configured to obtain, in real time, user depth information of the current user through the front-end optical capturing device; the user depth information comprises three-dimensional position coordinates of the current user; and user position coordinates of the current user relative to the front-end optical capturing device are determined based on a difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and is not described in detail here.
Fig. 5 is a block diagram illustrating an apparatus 500 for performing a display method of a three-dimensional image according to an exemplary embodiment. For example, device 500 may be a mobile phone, computer, digital broadcast electronic device, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 5, device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation of the device 500. Examples of such data include instructions for any application or method operating on the device 500, contact data, phonebook data, messages, pictures, video, and the like. The memory 504 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply component 506 provides power to the various components of the device 500. Power supply components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operational mode, such as a shooting mode or a video mode. Each of the front-facing and rear-facing cameras may be a fixed optical lens system or may have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the device 500. For example, the sensor assembly 514 may detect the on/off state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; the sensor assembly 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the device 500 and other devices, either wired or wireless. The device 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a computer readable storage medium is also provided, such as the memory 504, comprising instructions executable by the processor 520 of the device 500 to perform the above method. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program stored in a computer readable storage medium, the computer program being executable by the processor 520 of the device 500 to perform the above-described method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. A method of displaying a three-dimensional image, the method comprising:
Capturing a user observation visual angle in real time through a front-end optical capturing device of the electronic device; the method specifically comprises the following steps: acquiring a user image of a current user in real time through the front-end optical capturing device; determining screen space coordinates of the current user based on the position information of the preset key points of the current user in the user image; the preset key points are head key points or face key points; acquiring the distance between the current user and the screen; acquiring a perspective matrix of the front-end optical capturing device, and performing inverse matrix operation on the perspective matrix according to the distance to obtain an inverse perspective matrix of the front-end optical capturing device; multiplying the screen space coordinate by an inverse perspective matrix to obtain a user position coordinate; generating the user viewing angle based on the screen position coordinates and the user position coordinates;
Controlling the virtual camera to move according to the user observation visual angle to obtain a moved virtual camera; the visual angle of the three-dimensional object shot by the virtual camera after movement is the same as the user observation visual angle;
and displaying a target three-dimensional image obtained by shooting the three-dimensional object through the virtual camera after moving.
2. The method for displaying three-dimensional images according to claim 1, wherein capturing the user viewing angle in real time by a front-end optical capturing device of the electronic device comprises:
Tracking, in real time, viewing pose information of the current user through the front-end optical capturing device;
and determining a user observation visual angle of the current user for observing the three-dimensional object based on the viewing pose information.
3. The method of claim 2, wherein the viewing pose information includes user position coordinates of the current user relative to the front-end optical capturing device, and wherein determining a user observation visual angle of the current user for observing the three-dimensional object based on the viewing pose information comprises:
acquiring screen position coordinates of the screen relative to the front-end optical capturing device;
generating an observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the observation angle vector is used to characterize the user observation visual angle at which the current user observes the three-dimensional object.
4. A method of displaying a three-dimensional image according to claim 3, wherein said tracking in real time, by said front-end optical capturing device, viewing pose information of said current user comprises:
Acquiring a user image of the current user in real time through the front-end optical capturing device;
Determining screen space coordinates of the current user based on the position information of the current user in the user image;
And acquiring the distance between the current user and the screen, and determining the user position coordinate of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinate.
5. The method of claim 4, wherein determining the user position coordinates of the current user relative to the front-end optical capturing device based on the distance and the screen space coordinates comprises:
Obtaining a perspective matrix of the front-end optical capturing device;
According to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device;
And multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
6. A method for displaying a three-dimensional image according to claim 3, wherein controlling the movement of the virtual camera according to the viewing angle of the user to obtain a moved virtual camera comprises:
Acquiring an original shooting angle vector of the virtual camera; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is positioned at the original position coordinate;
Determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector;
And controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
7. The method of claim 6, wherein determining the target position coordinates of the virtual camera based on the difference between the original photographing angle vector and the observation angle vector, comprises:
Determining a camera transformation matrix of the virtual camera according to the difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used for representing movement information of the virtual camera;
and multiplying the original position coordinate by the camera transformation matrix to obtain the target position coordinate.
8. A method of displaying a three-dimensional image according to claim 3, wherein said tracking in real time, by said front-end optical capturing device, viewing pose information of said current user comprises: acquiring user depth information of the current user in real time through the front-end optical capturing device; the user depth information comprises three-dimensional position coordinates of the current user;
and determining user position coordinates of the current user relative to the front-end optical capturing device based on a difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
9. A display device for three-dimensional images, the device comprising:
A capturing unit configured to perform capturing a user viewing angle in real time through a front-end optical capturing device of the electronic device; the method specifically comprises the following steps: acquiring a user image of a current user in real time through the front-end optical capturing device; determining screen space coordinates of the current user based on the position information of the preset key points of the current user in the user image; the preset key points are head key points or face key points; acquiring the distance between the current user and the screen; acquiring a perspective matrix of the front-end optical capturing device, and performing inverse matrix operation on the perspective matrix according to the distance to obtain an inverse perspective matrix of the front-end optical capturing device; multiplying the screen space coordinate by an inverse perspective matrix to obtain a user position coordinate; generating the user viewing angle based on the screen position coordinates and the user position coordinates;
a control unit configured to perform control of movement of the virtual camera according to the user viewing angle, resulting in a moved virtual camera; the visual angle of the three-dimensional object shot by the virtual camera after movement is the same as the user observation visual angle;
And a display unit configured to perform display of a target three-dimensional image obtained by photographing the three-dimensional object by the post-movement virtual camera.
10. The three-dimensional image display apparatus according to claim 9, wherein the capturing unit is specifically configured to perform real-time tracking of viewing pose information of the current user by the front-end optical capturing device; and determining a user observation visual angle of the current user for observing the three-dimensional object based on the viewing pose information.
11. The apparatus according to claim 10, wherein the viewing pose information comprises user position coordinates of the current user relative to the front-end optical capturing device, the capturing unit being in particular configured to perform acquiring screen position coordinates of the screen relative to the front-end optical capturing device; generating an observation angle vector of the current user based on the difference between the user position coordinates and the screen position coordinates; the observation angle vector is used to characterize the user observation visual angle at which the current user views the three-dimensional object.
12. The three-dimensional image display device according to claim 11, wherein the capturing unit is specifically configured to perform real-time acquisition of the user image of the current user through the front-end optical capturing device; determining screen space coordinates of the current user based on the position information of the current user in the user image; and acquiring the distance between the current user and the screen, and determining the user position coordinates of the current user relative to the front-end optical capturing device according to the distance and the screen space coordinates.
13. The display apparatus of three-dimensional images according to claim 12, wherein the capturing unit is specifically configured to perform acquisition of a perspective matrix of the front-facing optical capturing device; according to the distance, performing inverse matrix operation on the perspective matrix to obtain an inverse perspective matrix of the front-end optical capturing device; and multiplying the screen space coordinate by the inverse perspective matrix to obtain the user position coordinate.
14. The three-dimensional image display device according to claim 11, wherein the control unit is specifically configured to perform acquisition of an original shooting angle vector of the virtual camera; the original shooting angle vector is used for representing the visual angle of the three-dimensional object when the virtual camera is positioned at the original position coordinate; determining target position coordinates of the virtual camera based on a difference between the original shooting angle vector and the observation angle vector; and controlling the virtual camera to move to the target position coordinate to obtain the moved virtual camera.
15. The three-dimensional image display device according to claim 14, wherein the control unit is specifically configured to perform determining a camera transformation matrix of the virtual camera from a difference between the original shooting angle vector and the observation angle vector; the camera transformation matrix is used for representing movement information of the virtual camera; and multiplying the original position coordinate by the camera transformation matrix to obtain the target position coordinate.
16. The three-dimensional image display device according to claim 11, wherein the capturing unit is specifically configured to perform real-time acquisition of user depth information of the current user through the front-end optical capturing device; the user depth information comprises three-dimensional position coordinates of the current user; and user position coordinates of the current user relative to the front-end optical capturing device are determined based on a difference between the three-dimensional position coordinates of the current user and the three-dimensional position coordinates of the front-end optical capturing device.
17. An electronic device, comprising:
A processor;
A memory for storing the processor-executable instructions;
Wherein the processor is configured to execute the instructions to implement the method of displaying a three-dimensional image as claimed in any one of claims 1 to 8.
18. A computer readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method of displaying a three-dimensional image according to any one of claims 1 to 8.
19. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method of displaying a three-dimensional image according to any one of claims 1 to 8.
CN202110571138.7A 2021-05-25 2021-05-25 Three-dimensional image display method and device, electronic equipment and storage medium Active CN113238656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110571138.7A CN113238656B (en) 2021-05-25 2021-05-25 Three-dimensional image display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110571138.7A CN113238656B (en) 2021-05-25 2021-05-25 Three-dimensional image display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113238656A CN113238656A (en) 2021-08-10
CN113238656B true CN113238656B (en) 2024-04-30

Family

ID=77138610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110571138.7A Active CN113238656B (en) 2021-05-25 2021-05-25 Three-dimensional image display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113238656B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900526A (en) * 2021-10-29 2022-01-07 深圳Tcl数字技术有限公司 Three-dimensional human body image display control method and device, storage medium and display equipment
CN114047823B (en) * 2021-11-26 2024-06-11 贝壳找房(北京)科技有限公司 Three-dimensional model display method, computer-readable storage medium and electronic device
CN114842179B (en) * 2022-05-20 2024-09-17 青岛海信医疗设备股份有限公司 Matching method of organ three-dimensional model and intraoperative organ image and electronic equipment
CN116400878B (en) * 2023-06-07 2023-09-08 优奈柯恩(北京)科技有限公司 Display method and device of head-mounted display device, electronic device and storage medium
CN117853694A (en) * 2024-03-07 2024-04-09 河南百合特种光学研究院有限公司 Virtual-real combined rendering method of continuous depth

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102305970A (en) * 2011-08-30 2012-01-04 福州瑞芯微电子有限公司 Naked eye three-dimensional display method and structure for automatically tracking human eye position
CN103402106A (en) * 2013-07-25 2013-11-20 青岛海信电器股份有限公司 Method and device for displaying three-dimensional image
CN105657406A (en) * 2015-12-31 2016-06-08 北京小鸟看看科技有限公司 Three-dimensional observation perspective selecting method and apparatus
CN106454315A (en) * 2016-10-26 2017-02-22 深圳市魔眼科技有限公司 Adaptive virtual view-to-stereoscopic view method and apparatus, and display device
CN108376424A (en) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 Method, apparatus, equipment and storage medium for carrying out view angle switch to three-dimensional virtual environment
CN109829981A (en) * 2019-02-16 2019-05-31 深圳市未来感知科技有限公司 Three-dimensional scenic rendering method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9325960B2 (en) * 2011-11-07 2016-04-26 Autodesk, Inc. Maintenance of three dimensional stereoscopic effect through compensation for parallax setting

Also Published As

Publication number Publication date
CN113238656A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN113238656B (en) Three-dimensional image display method and device, electronic equipment and storage medium
CN108182730B (en) Virtual and real object synthesis method and device
WO2022037285A1 (en) Camera extrinsic calibration method and apparatus
CN113643356B (en) Camera pose determination method, virtual object display method, device and electronic equipment
CN114025105B (en) Video processing method, device, electronic equipment and storage medium
CN109410276B (en) Key point position determining method and device and electronic equipment
US11252341B2 (en) Method and device for shooting image, and storage medium
CN112738420B (en) Special effect implementation method, device, electronic equipment and storage medium
CN114067085A (en) Virtual object display method and device, electronic equipment and storage medium
CN109218709B (en) Holographic content adjusting method and device and computer readable storage medium
CN113345000A (en) Depth detection method and device, electronic equipment and storage medium
CN111340690B (en) Image processing method, device, electronic equipment and storage medium
CN109934168B (en) Face image mapping method and device
CN111862288B (en) Pose rendering method, device and medium
JP7160887B2 (en) Image display method and device, electronic device, computer-readable storage medium
CN114078279B (en) Motion capture method, motion capture device, electronic equipment and storage medium
CN112883791B (en) Object recognition method, object recognition device, and storage medium
CN114430457B (en) Shooting method, shooting device, electronic equipment and storage medium
CN114078280A (en) Motion capture method, motion capture device, electronic device and storage medium
CN114155175B (en) Image generation method, device, electronic equipment and storage medium
CN110458962B (en) Image processing method and device, electronic equipment and storage medium
CN113569066B (en) Multimedia display method, device, electronic equipment, server and storage medium
CN113138660B (en) Information acquisition method and device, mobile terminal and storage medium
CN118695084A (en) Image processing method and device, electronic equipment and storage medium
CN118435239A (en) Camera calibration method and device, augmented reality device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant