CN116883553A - Image generation method, device, electronic equipment and readable storage medium - Google Patents

Image generation method, device, electronic equipment and readable storage medium

Info

Publication number
CN116883553A
CN116883553A (application CN202310833229.2A)
Authority
CN
China
Prior art keywords
preset
image
virtual object
head
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310833229.2A
Other languages
Chinese (zh)
Inventor
罗志平
艾永春
蒋晨晨
陈霖甲
蔡永辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd, China Mobile Communications Group Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202310833229.2A priority Critical patent/CN116883553A/en
Publication of CN116883553A publication Critical patent/CN116883553A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image generation method, an image generation device, an electronic device and a readable storage medium, and relates to the technical field of image processing. The image generation method includes the following steps: acquiring a head joint angle of a virtual object, wherein a virtual camera is arranged at the head of the virtual object; generating a head skeleton animation of the virtual object based on a preset mixing space and the head joint angle of the virtual object; determining a first camera view angle of the virtual camera according to the head skeleton animation; and generating a viewpoint image at the first camera view angle based on a neural radiance field.

Description

Image generation method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image generation method, an image generation device, electronic equipment and a readable storage medium.
Background
Free viewpoint video technology, which allows a user to autonomously select a viewpoint and generate dynamic scene video from a newly designated viewpoint, is gradually being applied in the video playback field. With the continuous and in-depth research on neural radiance fields, generating new views of free viewpoint video based on neural radiance fields is becoming a mainstream research direction. Currently, switching of the first-person viewing angle is usually achieved by combining pictures photographed from different angles, which causes the switching of the first-person view picture to be not smooth.
Disclosure of Invention
The embodiment of the application provides an image generation method, an image generation device, electronic equipment and a readable storage medium, which can solve the problem in the related art that switching of the first-person view picture is not smooth.
In a first aspect, an embodiment of the present application provides an image generating method, including:
acquiring a head joint angle of a virtual object, wherein a virtual camera is arranged at the head of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
determining a first camera view angle of the virtual camera according to the head bone animation;
and generating a viewpoint image at the first camera view angle based on a neural radiance field.
In a second aspect, an embodiment of the present application provides an image generating apparatus, including:
the first acquisition module is used for acquiring the head joint angle of the virtual object, and the head of the virtual object is provided with a virtual camera;
the first generation module is used for generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
a first determining module for determining a first camera perspective of the virtual camera according to the head bone animation;
and a second generation module is used for generating a viewpoint image under the first camera view angle based on the neural radiance field.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the image generation method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the image generation method according to the first aspect.
In the embodiment of the application, the head skeleton animation of the virtual object is generated based on the preset mixing space and the head joint angle of the virtual object, so that the generated head skeleton animation is positioned in the preset mixing space, thereby ensuring the smoothness of the head skeleton animation and further ensuring the smoothness of the finally generated viewpoint image.
Drawings
Fig. 1 is a flowchart of an image generating method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a preset mixing space in an embodiment of the present application;
FIG. 3 is a schematic diagram of determining boundary points according to focal length and image depth in an embodiment of the application;
fig. 4 is a block diagram of an image generating apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The image generation method, the device, the electronic equipment and the like provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an image generating method according to an embodiment of the present application, where the method may be applied to an electronic device such as a mobile phone, a tablet computer, a computer, or an intelligent wearable device, and in a subsequent embodiment, the electronic device is used as an execution body of the method according to the embodiment of the present application, to explain a technical solution according to the embodiment of the present application.
As shown in fig. 1, the image generating method provided by the embodiment of the application includes the following steps:
step 101, acquiring a head joint angle of a virtual object, wherein a virtual camera is arranged at the head of the virtual object.
As an optional implementation scenario, the method provided by the embodiment of the application may be applied to generation of a video image in an electronic device. Alternatively, the virtual object may be a virtual character, for example, a specific virtual character in a video, or an avatar generated by the electronic device according to a real character captured by a camera.
Illustratively, the electronic device captures the face of the user in real time through the camera and generates a corresponding virtual object, such as a virtual person; according to the head image of the user captured by the camera, a state-of-the-art (SOTA) head pose estimation algorithm is adopted, such as the method described in Fine-Grained Head Pose Estimation Without Keypoints, to estimate the head joint angle of the user, and the head joint angle of the user is taken as the head joint angle of the virtual person (i.e., the virtual object). Optionally, the head joint angle may include a roll angle (roll), a pitch angle (pitch), and a yaw angle (yaw) of the user's head joint.
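For illustration only, the following Python sketch outlines this acquisition step under stated assumptions: a frame is grabbed from the front camera with OpenCV and passed to a head pose estimator. The function estimate_head_pose is a hypothetical placeholder for any SOTA estimator (such as the keypoint-free method cited above) and is not an interface defined by this application.

```python
import cv2  # OpenCV, used here only to grab frames from the front camera

def estimate_head_pose(frame):
    """Hypothetical placeholder for a SOTA head pose estimator; it should return the
    (roll, pitch, yaw) angles of the user's head, e.g. in radians."""
    raise NotImplementedError("plug in a trained head pose estimation model here")

def acquire_head_joint_angle(capture):
    """Grab one frame and use the user's estimated head pose as the head joint angle
    of the virtual object (the first-person viewing agent)."""
    ok, frame = capture.read()
    if not ok:
        return None
    return estimate_head_pose(frame)

# Usage sketch:
# capture = cv2.VideoCapture(0)              # front camera
# head_joint_angle = acquire_head_joint_angle(capture)
```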
Alternatively, the electronic device may determine the head joint angle of the virtual object based on the user's input. Optionally, the acquiring the head joint angle of the virtual object includes:
acquiring a first input;
a head joint angle of a virtual object is determined from the first input, wherein the head joint angle includes one or more of a roll angle, a pitch angle, and a yaw angle of a head joint of the virtual object.
Illustratively, the head joint angles include roll, pitch and yaw angles of a head joint of the virtual object. The electronic equipment display interface can be provided with three input boxes which correspond to the roll angle, the pitch angle and the yaw angle respectively, and then the electronic equipment can determine the head joint angle of the virtual object by receiving numerical values input by a user in the three input boxes, so that the user can set and adjust the head joint angle of the virtual object through the first input, and the setting of the head joint angle of the virtual object is more flexible.
Alternatively, the first input may take other possible forms; for example, the first input may be a voice input, a gesture input, or the like, or the head joint angle of the virtual object may be determined based on the movement of the user's head.
In the embodiment of the application, a virtual camera is arranged at the head of the virtual object, and the virtual object serves as a first-person viewing agent for watching the video; the video captured by the virtual camera corresponds to the viewing angle of the virtual object. It can be appreciated that if the head of the virtual object rotates, the viewing angle of the virtual object also changes; since the virtual camera is arranged at the head of the virtual object, the view angle of the virtual camera changes correspondingly with the movement of the head of the virtual object. Thus, acquiring the head joint angle of the virtual object is equivalent to acquiring the view angle of the virtual camera.
Step 102, generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object.
The head skeleton animation of the virtual object may be generated based on the head joint angle of the virtual object. For example, assuming that the head of the virtual object rotates, a plurality of head joint angles in the process of rotating the head of the virtual object can be obtained, and based on the plurality of head joint angles, a head skeleton animation corresponding to the process of rotating the head of the virtual object can be correspondingly generated. The specific implementation principle of generating the head skeleton animation according to the head joint angle may refer to the related technology, and this embodiment is not described in detail.
In the embodiment of the application, after the electronic equipment acquires the head joint angle of the virtual object, the electronic equipment generates the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle. The preset mixing space may be a preset cube space, the point positions in the preset cube space may correspond to preset head skeleton animation and head joint angles, and after the electronic device obtains the head joint angles of the virtual objects, the head skeleton animation corresponding to the preset mixing space may be obtained based on the head joint angles, so as to generate the head skeleton animation of the virtual objects. In this way, the generated head bone animation can be limited in the preset mixing space, so that the smoothness of the bone animation is ensured.
Step 103, determining a first camera view angle of the virtual camera according to the head bone animation.
In the embodiment of the application, after the electronic device generates the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle of the virtual object, the visual angle of the eyes of the virtual object can be determined according to the head skeleton animation, and the virtual camera is arranged on the head of the virtual object, so that the visual angle of the eyes of the virtual object is equivalent to the camera visual angle of the virtual camera, namely the first camera visual angle.
Step 104, generating a viewpoint image under the first camera view angle based on the neural radiance field.
The neural radiance field (Neural Radiance Field, NeRF) may be implemented by a trained neural network. Optionally, the neural radiance field includes a multi-layer perception network model, the multi-layer perception network model is obtained by training based on a preset image data set, each image in the preset image data set includes a corresponding camera view angle, camera intrinsic parameters and an image depth, the input of the multi-layer perception network model is a camera view angle, and the output is a viewpoint image corresponding to the camera view angle.
The camera intrinsic parameters may include parameters such as the focal length and image resolution of the camera. It should be noted that the preset image data set includes a plurality of images. The execution body for training the multi-layer perception network model, for example an electronic device (which may be different from the electronic device executing the image generation method of the application), may estimate the camera view angle (i.e., rotation and translation), camera intrinsic parameters and image depth of each image by using the COLMAP method, and train the multi-layer perception network model based on the preset image data set. Each image in the preset image data set includes a corresponding camera view angle, i.e., the camera view angle of the image is obtained; the preset image data set is used as the input of the multi-layer perception network model, the input includes the camera view angle of each image, and the output of the multi-layer perception network model is the viewpoint image corresponding to the camera view angle. By training the multi-layer perception network model, the trained model can output the corresponding viewpoint image according to the input camera view angle with high accuracy.
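For illustration only, the Python sketch below shows a generic NeRF-style multi-layer perception network of the kind referred to above, assuming the common formulation in which a positionally encoded 3D point and view direction are mapped to a volume density and an RGB color; it is not the specific network of this application, and the ray sampling and volume rendering steps needed to render a full viewpoint image from a camera view angle are omitted.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    # Standard NeRF frequency encoding: [x, sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1.
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)

class NerfMLP(nn.Module):
    """Minimal NeRF-style multi-layer perceptron: (3D point, view direction) -> (density, RGB)."""
    def __init__(self, pos_freqs=10, dir_freqs=4, hidden=256):
        super().__init__()
        pos_dim = 3 * (1 + 2 * pos_freqs)
        dir_dim = 3 * (1 + 2 * dir_freqs)
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, points, view_dirs):
        h = self.trunk(positional_encoding(points, self.pos_freqs))
        sigma = torch.relu(self.density_head(h))   # volume density along the ray
        rgb = self.color_head(
            torch.cat([h, positional_encoding(view_dirs, self.dir_freqs)], dim=-1))
        return sigma, rgb
```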
In the technical scheme provided by the embodiment of the application, the electronic device acquires the head joint angle of the virtual object, where a virtual camera is arranged at the head of the virtual object, the viewing angle of the eyes of the virtual object is equivalent to the camera view angle of the virtual camera, and the rotation angle of the head of the virtual object is equivalent to the change of the camera view angle of the virtual camera; the electronic device then generates a head skeleton animation of the virtual object based on a preset mixing space and the head joint angle of the virtual object, determines a first camera view angle of the virtual camera according to the head skeleton animation, and generates a viewpoint image under the first camera view angle based on a neural radiance field.
In the embodiment of the application, the head skeleton animation of the virtual object is generated based on the preset mixing space and the head joint angle of the virtual object, so that the generated head skeleton animation is positioned in the preset mixing space, thereby ensuring the smoothness of the head skeleton animation and further ensuring the smoothness of the finally generated viewpoint image. And the virtual camera is arranged at the head of the virtual object, so that the first camera view angle is the first person view angle of the virtual object, and the finally generated viewpoint image is the browsing image under the first person view angle, thereby being more beneficial to improving the watching experience of the user in the video scene.
Optionally, the preset mixing space is a cubic space, and one vertex of the cubic space corresponds to a preset head skeleton animation and a preset head joint angle; the step 102 may include:
performing linear interpolation processing on the head joint angle of the virtual object to obtain the weight of the head joint angle of the virtual object at each vertex of the cube space;
fusing preset head joint angles corresponding to the vertexes of the cube space according to the weights of the vertexes to obtain target gestures corresponding to the head joint angles of the virtual object;
and determining the head skeleton animation corresponding to the target gesture according to the preset head skeleton animation.
The preset mixing space may be a cubic space previously established based on a plurality of preset head bone animations and a plurality of preset head joint angles. It will be appreciated that the cube space includes eight vertices corresponding to eight preset head bone animations and eight preset head joint angles, respectively. Specifically, referring to fig. 2, each skeletal pose (i.e., head joint angle) is defined by (roll, pitch, yaw), and the eight preset head skeleton animations and the eight preset head joint angles respectively include: the bone animation with the head upward and its pose pose_0; the bone animation with the head downward and its pose pose_1; the bone animation with the head to the left and its pose pose_2; the bone animation with the head to the right and its pose pose_3; the bone animation with the head toward the upper left corner and its pose pose_4; the bone animation with the head toward the upper right corner and its pose pose_5; the bone animation with the head toward the lower left corner and its pose pose_6; and the bone animation with the head toward the lower right corner and its pose pose_7. In addition, an idle bone animation and its pose pose_idle are added at the center of the cube, giving a total of 9 bone animations and poses.
In the embodiment of the present application, based on the cube representation, for a given head joint angle, for example the head joint angle of the virtual object, the weights of the eight vertices of the preset mixing space may be obtained by a tri-linear interpolation method, and are denoted (ω_0, ω_1, ω_2, ω_3, ω_4, ω_5, ω_6, ω_7). Note that the specific implementation process of the tri-linear interpolation method may refer to the related art, and this embodiment is not described in detail.
Further, according to the weight of each vertex, the preset head skeleton animation and the preset head joint angle corresponding to each vertex of the cube space are fused to obtain the target gesture corresponding to the head joint angle of the virtual object. Illustratively, the fusion of the cube-space skeleton poses (i.e., the preset head joint angles of the vertices) according to the weight of each vertex may be expressed as a weighted sum of the vertex poses, and the final target pose is pose_target = ω_0·pose_0 + ω_1·pose_1 + … + ω_7·pose_7.
In the preset mixing space, each vertex comprises a corresponding preset head skeleton animation and a preset head joint angle (namely gesture), and after a target gesture corresponding to the head joint angle of the virtual object is obtained, the head skeleton animation corresponding to the target gesture can be obtained according to the relation between the preset head skeleton animation and the preset head gesture.
In the embodiment of the application, the head bone animation and the preset head joint angle preset by each vertex of the preset mixing space are used for fusing the head joint angle of the virtual object into the preset mixing space, and the head bone animation corresponding to the head joint angle of the virtual object is obtained. Therefore, the head skeleton animation corresponding to the head joint angle of the virtual object is positioned in the preset mixing space, and the head skeleton animation is limited in a certain range, so that the curve smoothness of the head skeleton animation is ensured. The first camera view angle is determined based on the head skeleton animation, so that the curve smoothness of the head skeleton animation is ensured, the smoothness of a view point image generated based on the first camera view angle can be ensured, and the condition of picture jitter is avoided.
The preset head joint angles and the preset head bone animations corresponding to the vertices of the preset mixing space can be set based on a given range; the head joint angle of the virtual object may also be limited to a certain range, so as to ensure that the head bone animation of the virtual object is located in the preset mixing space. For example, in the case where the electronic device determines the head joint angle of the virtual object by receiving numerical values input by the user in the input boxes, the value range of the input boxes may be limited to [-0.785, 0.785] radians, i.e., between -45 and 45 degrees, so that the preset mixing space can be created based on this value range.
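For illustration only, the Python sketch below follows the blend-space idea described above under stated assumptions: the head joint angle is clamped to the [-0.785, 0.785] radian range, tri-linear interpolation gives the weights of the eight cube vertices, and the target gesture is a weighted fusion of the preset vertex poses. The vertex ordering, the mapping of roll/pitch/yaw to the cube axes, and the use of a simple weighted sum over joint rotations (rather than, e.g., quaternion blending) are simplifying assumptions, not details fixed by the application.

```python
import numpy as np

# Assumed angle range of the preset mixing space, as mentioned in the description.
ANGLE_MIN, ANGLE_MAX = -0.785, 0.785

def trilinear_weights(roll, pitch, yaw):
    """Weights of the 8 cube vertices for a given head joint angle (tri-linear interpolation)."""
    # Normalize each angle into [0, 1] along its own (assumed) cube axis.
    t = [(np.clip(a, ANGLE_MIN, ANGLE_MAX) - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
         for a in (roll, pitch, yaw)]
    weights = []
    for i in range(8):                       # assumed vertex order: index bits encode the corner
        w = 1.0
        for axis in range(3):
            bit = (i >> axis) & 1
            w *= t[axis] if bit else (1.0 - t[axis])
        weights.append(w)
    return np.array(weights)                 # the eight weights sum to 1

def blend_target_pose(roll, pitch, yaw, vertex_poses):
    """Fuse the 8 preset vertex poses (each an array of joint rotations) into the target gesture."""
    w = trilinear_weights(roll, pitch, yaw)
    vertex_poses = np.asarray(vertex_poses, dtype=float)   # shape (8, ...), preset poses
    return np.tensordot(w, vertex_poses, axes=1)           # weighted sum over the 8 vertices
```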
Optionally, the step 104 may include:
and generating a viewpoint image under the first camera view angle based on the neural radiance field under the condition that the first camera view angle is located in a preset directional bounding box.
In the embodiment of the application, after the first camera view angle of the virtual camera is determined based on the head bone animation of the virtual object, whether the first camera view angle is located in a preset directional bounding box (Oriented Bounding Box, OBB) is judged, and under the condition that the first camera view angle is determined to be located in the preset directional bounding box, a viewpoint image under the first camera view angle is generated based on the neural radiance field.
The preset directional bounding box can be generated based on the camera position and camera view angle corresponding to each image in the preset image data set used for training the multi-layer perception network model of the neural radiance field, and therefore corresponds to a certain range of camera view angles. The viewpoint image corresponding to the first camera view angle is generated based on the neural radiance field only when the first camera view angle is located in the preset directional bounding box, so that the range of the first camera view angle is limited and the generation of the viewpoint image is guaranteed.
Optionally, before the step 104, the method further includes:
generating the preset directed bounding box based on a second camera view angle corresponding to a preset image data set, wherein the preset image data set is a training data set of the multi-layer perception network model;
acquiring a first camera position, a first focal length and a first image depth corresponding to the first camera view angle;
determining a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction according to the first focal length and the first image depth;
And determining that the first camera view angle is positioned in the preset directional bounding box under the condition that the target parts of the first camera position, the first boundary point and the second boundary point are positioned in the preset directional bounding box.
In the embodiment of the application, the neural radiance field comprises a multi-layer perception network model, the multi-layer perception network model is trained based on a preset image data set, and each image in the preset image data set includes the camera view angle from which the image was captured. It will be appreciated that the preset image data set comprises a plurality of images, i.e. a plurality of camera view angles (i.e. second camera view angles), and the preset directed bounding box is generated based on these second camera view angles.
Optionally, the generating the preset directional bounding box based on the second camera view angle corresponding to the preset image data set includes:
acquiring a second camera position, a second focal length and a second image depth corresponding to each image in the preset image data set;
determining a third boundary point of a camera view angle corresponding to a target image in a horizontal direction and a fourth boundary point of the camera view angle in a vertical direction according to a target second focal length and a target second image depth, wherein the target image is any one image in the preset image data set, and the target second focal length and the target second image depth are respectively a second focal length and a second image depth corresponding to the target image;
And generating the preset directed bounding box according to the second camera position, the third boundary point and the fourth boundary point corresponding to each image.
It will be appreciated that each image in the preset image dataset includes a second camera position, a second focal length and a second image depth for obtaining the image, and that a boundary of a camera view angle corresponding to the image in a horizontal direction and a boundary point in a vertical direction can be determined based on the second focal length and the second image depth of the image.
Specifically, taking the horizontal direction as an example, the camera view angle (field of view, FoV) corresponding to an image is determined according to the second focal length of the image, and two boundary points (i.e., the third boundary points) are determined based on the second image depth of the image. For the boundary points, the second focal length determines the corresponding camera view angle FoV: assuming the sensor size of the captured image is the common x = 43.27 mm, the view angle may be obtained by the formula FoV = 2·arctan(x/(2f)), where arctan denotes the arctangent and f denotes the second focal length. The maximum value of the image depth is taken and denoted d_max. Therefore, as shown in fig. 3, in the horizontal direction the triangular pyramid determined by the camera position and the camera view angle may be truncated at the depth d_max to obtain the two third boundary points in the horizontal direction. Similarly, two fourth boundary points may be determined in the vertical direction according to the same calculation process as in the horizontal direction, so as to obtain the two fourth boundary points of the image in the vertical direction. Further, for any image in the preset image data set, a triangular pyramid corresponding to the image, i.e., the view angle range corresponding to the image, can be constructed according to the second camera position corresponding to the image, the two third boundary points in the horizontal direction and the two fourth boundary points in the vertical direction.
By the method, two third boundary points corresponding to each image in the preset image data set in the horizontal direction and two fourth boundary points corresponding to each image in the vertical direction can be obtained, so that based on the third boundary points and the fourth boundary points and the second camera positions corresponding to each image, space point clouds for determining the camera positions and the view angle ranges are finally obtained, and the space point clouds form the preset directed bounding box.
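For illustration only, the Python sketch below computes the boundary points and the point cloud described above, assuming the FoV formula FoV = 2·arctan(x/(2f)) with the common sensor size x = 43.27 mm; the camera record layout is an assumption, and the actual fitting of the oriented bounding box to the resulting point cloud (e.g., by principal component analysis) is omitted.

```python
import numpy as np

SENSOR_SIZE_MM = 43.27          # assumed sensor size used in the description

def fov_from_focal_length(focal_length_mm, sensor_size_mm=SENSOR_SIZE_MM):
    """Camera view angle (FoV, radians) from the focal length: FoV = 2 * arctan(x / (2f))."""
    return 2.0 * np.arctan(sensor_size_mm / (2.0 * focal_length_mm))

def boundary_points(camera_pos, forward, lateral, focal_length_mm, depth_max):
    """Two boundary points at depth d_max along the +/- half-angle directions in one plane
    (use the horizontal axis for the third boundary points, the vertical axis for the fourth)."""
    half_angle = 0.5 * fov_from_focal_length(focal_length_mm)
    center = np.asarray(camera_pos) + np.asarray(forward) * depth_max
    offset = np.asarray(lateral) * (depth_max * np.tan(half_angle))
    return center - offset, center + offset

def build_obb_point_cloud(cameras):
    """Collect camera positions and view-range boundary points into the point cloud
    from which the preset directional bounding box is fitted."""
    pts = []
    for cam in cameras:   # each cam: dict with position, forward, right, up, focal_length, depth_max
        pts.append(np.asarray(cam["position"]))
        pts.extend(boundary_points(cam["position"], cam["forward"], cam["right"],
                                   cam["focal_length"], cam["depth_max"]))
        pts.extend(boundary_points(cam["position"], cam["forward"], cam["up"],
                                   cam["focal_length"], cam["depth_max"]))
    return np.asarray(pts)
```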
In the embodiment of the application, the preset directional bounding box is generated according to the second camera position, the second focal length and the second image depth corresponding to each image in the preset image data set, and the view angle range is limited by the preset directional bounding box; then, when the first camera view angle of the virtual camera is determined to be located in the preset directional bounding box, the viewpoint image corresponding to the first camera view angle is generated based on the neural radiance field. It can be appreciated that the multi-layer perception network model of the neural radiance field is trained based on the preset image data set, and the preset directional bounding box is generated based on the relevant parameters of the images in the preset image data set, so that a corresponding viewpoint image can be generated when the first camera view angle is located in the preset directional bounding box, ensuring that the generated viewpoint image has high accuracy.
Optionally, whether the first camera view angle is located within the preset directional bounding box may be specifically determined by:
acquiring a first camera position, a first focal length and a first image depth corresponding to the first camera view angle, and determining a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction according to the first focal length and the first image depth; and determining that the first camera view angle is positioned in the preset directional bounding box under the condition that the target parts of the first camera position, the first boundary point and the second boundary point are positioned in the preset directional bounding box.
The first boundary point of the first camera view angle in the horizontal direction and the second boundary point in the vertical direction may be determined in the same manner as the third and fourth boundary points described above. For example, assuming that the first camera view angle is the average camera view angle of the images in the preset image data set, the first image depth is the average image depth of the images in the preset image data set, and the first focal length is the average focal length of the images in the preset image data set, then based on these parameters, two first boundary points of the first camera view angle in the horizontal direction and two second boundary points in the vertical direction can be calculated in the same manner, and a triangular pyramid is formed from the first camera position corresponding to the first camera view angle. If this triangular pyramid is located in the preset directional bounding box, it can be determined that the first camera view angle is located in the preset directional bounding box, so as to ensure that a corresponding viewpoint image can be generated based on the first camera view angle.
Optionally, the determining that the first camera view angle is located in the preset directional bounding box if the target portions of the first camera position, the first boundary point, and the second boundary point are located in the preset directional bounding box includes:
constructing a first cone corresponding to the first camera view angle according to the first camera position, a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction;
and determining that the first camera view angle is positioned in a preset directional bounding box when the first cone is positioned in the preset directional bounding box entirely or when the preset cone part of the first cone is positioned in the preset directional bounding box.
In an exemplary embodiment, after determining a first camera position corresponding to a first camera view angle, and a first boundary point in a horizontal direction and a second boundary point in a vertical direction, the first camera position is taken as a vertex of a triangular pyramid, and a size of an arc of a bottom surface of the triangular pyramid is determined according to the first boundary point and the second boundary point, so that a first pyramid corresponding to the first camera view angle is obtained.
As described above, the preset directional bounding box is formed according to the second camera position, the third boundary points and the fourth boundary points corresponding to each image in the preset image data set. If the first cone is entirely located within the preset directional bounding box, it may be determined that the first camera view angle is located in the preset directional bounding box; alternatively, if a preset cone portion of the first cone, for example at least 80% of the first cone, is located within the preset directional bounding box, the first camera view angle may also be considered to be located in the preset directional bounding box. Through such a determination, it is ensured that the neural radiance field can generate the viewpoint image corresponding to the first camera view angle, so as to ensure the accuracy of the generated viewpoint image and the integrity of the image content, and to avoid the situation that no viewpoint image can be generated when the first camera view angle is not located in the preset directional bounding box, or that the generated viewpoint image is incomplete because only part of the view angle is located in the preset directional bounding box.
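For illustration only, the Python sketch below checks whether the first cone lies in the preset directional bounding box by sampling points of the cone and testing them against the box, accepting the view angle when the whole cone or at least a preset portion of it (here assumed to be 80%, as in the example above) is inside; the OBB representation (center, axes, half-sizes) and the sampling-based test are assumptions.

```python
import numpy as np

def point_in_obb(point, obb_center, obb_axes, obb_half_sizes):
    """True if a point lies inside an oriented bounding box given by its center,
    3 orthonormal axes (rows of a 3x3 matrix) and half-sizes along those axes."""
    local = obb_axes @ (np.asarray(point) - obb_center)
    return np.all(np.abs(local) <= obb_half_sizes)

def frustum_fraction_in_obb(frustum_samples, obb_center, obb_axes, obb_half_sizes):
    """Fraction of sampled cone points (e.g., the camera position plus points sampled inside
    the first cone built from the boundary points) that fall inside the OBB."""
    inside = [point_in_obb(p, obb_center, obb_axes, obb_half_sizes) for p in frustum_samples]
    return float(np.mean(inside))

def first_view_in_bounding_box(frustum_samples, obb_center, obb_axes, obb_half_sizes,
                               threshold=0.8):
    """Accept the first camera view angle when the whole cone (fraction == 1.0) or at least
    the assumed preset portion (80%) of it lies inside the preset directional bounding box."""
    return frustum_fraction_in_obb(frustum_samples, obb_center, obb_axes,
                                   obb_half_sizes) >= threshold
```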
Optionally, in an embodiment of the present application, the generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object may further include:
Acquiring a timestamp corresponding to the head joint angle of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is smaller than or equal to a preset time difference;
and generating the head skeleton animation of the virtual object based on a preset mixing space and the historical head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is larger than a preset time difference.
It can be appreciated that after the electronic device obtains the head joint angle of the virtual object, it generates the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle; in this process, a time delay may exist, or a stall or delay may be caused by the hardware or software of the electronic device itself. For example, the head joint angle of the virtual object may be obtained by the electronic device capturing an image of the user's head through the front camera, and the frame rate of the camera capture may differ from the frame rate at which new viewpoint images are generated when the head skeleton animation of the virtual object is used to drive the neural radiance field.
Optionally, when acquiring the head joint angle of the virtual object, the electronic device may correspondingly generate a timestamp for acquiring the head joint angle; in the process of generating the head skeleton animation of the virtual object, the electronic device may check whether the time difference between the current time information and the timestamp is smaller than or equal to a preset time difference. If the time difference is smaller than or equal to the preset time difference, it indicates that there is no delay or the delay is small, and the head skeleton animation of the virtual object is generated based on the preset mixing space and the head joint angle; if the time difference is larger than the preset time difference, it indicates that the delay is large, and the head skeleton animation is generated based on the preset mixing space and the historical head joint angle of the virtual object.
In the embodiment of the application, when the time difference between the current time information and the obtained time stamp of the head joint angle is in a reasonable range, generating the head bone animation based on the obtained head joint angle and a preset mixing space, so as to ensure the instantaneity of the finally generated viewpoint image; when the time difference between the current time information and the obtained timestamp of the head joint angle is large and exceeds a reasonable range, the fact that delay exists between the back-end processing process of the electronic equipment and the front-end obtaining of the head joint angle is explained, and head bone animation is generated based on the previously obtained historical head joint angle and a preset mixing space, so that poor watching experience brought to a user due to delay of a finally generated viewpoint image is avoided.
Optionally, when a delay exists between the back-end processing of the electronic device and the front-end acquisition of the head joint angle, the head joint angle of the virtual object, i.e., the roll, pitch and yaw angles of the head of the virtual object, can be estimated by linear interpolation according to the current time information; the estimated head joint angle close to the current time information replaces the acquired head joint angle, and the head skeleton animation is then generated from the estimated head joint angle and the preset mixing space, so that the delay of the finally generated viewpoint image caused by the back-end delay is avoided and the viewing experience of the user is ensured.
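For illustration only, the Python sketch below implements the timestamp check and the linear extrapolation fallback described above; the preset time difference value and the sample format are assumptions.

```python
import time

PRESET_MAX_DELAY_S = 0.05      # assumed preset time difference threshold

def select_head_angles(samples, now=None, max_delay=PRESET_MAX_DELAY_S):
    """Pick the head joint angles used to drive the blend space.

    `samples` is a list of (timestamp, (roll, pitch, yaw)) tuples in chronological order.
    If the newest sample is fresh enough, use it directly; otherwise linearly
    extrapolate from the last two historical samples to the current time."""
    now = time.time() if now is None else now
    t1, a1 = samples[-1]
    if now - t1 <= max_delay or len(samples) < 2:
        return a1                                   # fresh enough, or nothing to extrapolate from
    t0, a0 = samples[-2]
    if t1 == t0:
        return a1
    k = (now - t0) / (t1 - t0)                      # linear extrapolation factor
    return tuple(v0 + k * (v1 - v0) for v0, v1 in zip(a0, a1))
```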
As an optional application scenario, the scheme provided by the embodiment of the application can be applied to a virtual reality (VR) scenario. The view presented by the VR headset changes with the change of the user's head angle, so the viewing angle of the user's eyes can be mapped to the camera view angle of the VR headset, thereby generating viewpoint images that follow the change of the user's viewing angle, with smooth picture switching. This brings a better viewing experience to the user and realizes immersive video viewing at any viewing angle, without requiring three-dimensional scene modeling as in related VR technology.
The embodiment of the application also provides an image generation device. Referring to fig. 4, fig. 4 is a block diagram of an image generating apparatus according to an embodiment of the present application, and as shown in fig. 4, an image generating apparatus 400 includes:
a first obtaining module 401, configured to obtain a head joint angle of a virtual object, where a virtual camera is disposed on a head of the virtual object;
a first generation module 402, configured to generate a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
a first determining module 403, configured to determine a first camera perspective of the virtual camera according to the head bone animation;
a second generation module 404, configured to generate a viewpoint image at the first camera view angle based on the neural radiance field.
Optionally, the preset mixing space is a cubic space, and one vertex of the cubic space corresponds to a preset head skeleton animation and a preset head joint angle;
the first generating module 402 is further configured to:
performing linear interpolation processing on the head joint angle of the virtual object to obtain the weight of the head joint angle of the virtual object at each vertex of the cube space;
Fusing preset head joint angles corresponding to the vertexes of the cube space according to the weights of the vertexes to obtain target gestures corresponding to the head joint angles of the virtual object;
and determining the head skeleton animation corresponding to the target gesture according to the preset head skeleton animation.
Optionally, the second generating module 404 is further configured to:
and generating a viewpoint image under the first camera view angle based on the neural radiance field under the condition that the first camera view angle is located in a preset directional bounding box.
Optionally, the apparatus further comprises:
the third generation module is used for generating the preset directed bounding box based on a second camera view angle corresponding to a preset image data set, wherein the preset image data set is a training data set of the multi-layer perception network model of the neural radiance field;
the second acquisition module is used for acquiring a first camera position, a first focal length and a first image depth corresponding to the first camera view angle;
a second determining module, configured to determine a first boundary point of the first camera view angle in a horizontal direction and a second boundary point of the first camera view angle in a vertical direction according to the first focal length and the first image depth;
And the third determining module is used for determining that the first camera view angle is positioned in the preset directional bounding box when the target parts of the first camera position, the first boundary point and the second boundary point are positioned in the preset directional bounding box.
Optionally, the preset image dataset comprises a plurality of images; the third generating module is further configured to:
acquiring a second camera position, a second focal length and a second image depth corresponding to each image in the preset image data set;
determining a third boundary point of a camera view angle corresponding to a target image in a horizontal direction and a fourth boundary point of the camera view angle in a vertical direction according to a target second focal length and a target second image depth, wherein the target image is any one image in the preset image data set, and the target second focal length and the target second image depth are respectively a second focal length and a second image depth corresponding to the target image;
and generating the preset directed bounding box according to the second camera position, the third boundary point and the fourth boundary point corresponding to each image.
Optionally, the third determining module is further configured to:
constructing a first cone corresponding to the first camera view angle according to the first camera position, a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction;
And determining that the first camera view angle is positioned in a preset directional bounding box when the first cone is positioned in the preset directional bounding box entirely or when the preset cone part of the first cone is positioned in the preset directional bounding box.
Optionally, the first generating module 402 is further configured to:
acquiring a timestamp corresponding to the head joint angle of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is smaller than or equal to a preset time difference;
and generating the head skeleton animation of the virtual object based on a preset mixing space and the historical head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is larger than a preset time difference.
Optionally, the first obtaining module 401 is further configured to:
acquiring a first input;
a head joint angle of a virtual object is determined from the first input, the head joint angle including one or more of a roll angle, a pitch angle, and a yaw angle of a head joint of the virtual object.
Optionally, the neural radiance field includes a multi-layer perception network model, the multi-layer perception network model is obtained by training based on a preset image data set, each image in the preset image data set includes a corresponding camera view angle, camera intrinsic parameters and an image depth, the input of the multi-layer perception network model is the camera view angle, and the output is a viewpoint image corresponding to the camera view angle.
In the embodiment of the application, the device can generate the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle of the virtual object, so that the generated head skeleton animation is positioned in the preset mixing space, thereby ensuring the smoothness of the head skeleton animation, and further ensuring the smoothness of the finally generated viewpoint image.
The image generating apparatus 400 in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), or the like, and the embodiments of the present application are not limited in particular.
The image generating apparatus 400 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image generating apparatus 400 provided in the embodiment of the present application can implement each process implemented by the embodiment of the method described in fig. 1, and in order to avoid repetition, a description is omitted here.
The embodiment of the application also provides electronic equipment. Referring to fig. 5, fig. 5 is a block diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, the electronic device includes: a processor 500, a memory 520, and a program or instructions stored on the memory 520 and executable on the processor 500, the processor 500 for reading the program or instructions in the memory 520; the electronic device further comprises a bus interface and a transceiver 510.
A transceiver 510 for receiving and transmitting data under the control of the processor 500.
In fig. 5, the bus architecture may comprise any number of interconnected buses and bridges, linking together one or more processors represented by the processor 500 and various memory circuits represented by the memory 520. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 510 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatuses over a transmission medium. The processor 500 is responsible for managing the bus architecture and general processing, and the memory 520 may store data used by the processor 500 in performing operations.
The processor 500, configured to read a program or instructions in the memory 520, performs the following steps:
acquiring a head joint angle of a virtual object, wherein a virtual camera is arranged at the head of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
determining a first camera view angle of the virtual camera according to the head bone animation;
and generating a viewpoint image at the first camera view angle based on a neural radiance field.
Optionally, the preset mixing space is a cubic space, and one vertex of the cubic space corresponds to a preset head skeleton animation and a preset head joint angle; the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
performing linear interpolation processing on the head joint angle of the virtual object to obtain the weight of the head joint angle of the virtual object at each vertex of the cube space;
fusing preset head joint angles corresponding to the vertexes of the cube space according to the weights of the vertexes to obtain target gestures corresponding to the head joint angles of the virtual object;
And determining the head skeleton animation corresponding to the target gesture according to the preset head skeleton animation.
Optionally, the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
and generating a viewpoint image under the first camera view angle based on the neural radiance field under the condition that the first camera view angle is located in a preset directional bounding box.
Optionally, the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
generating the preset directed bounding box based on a second camera view angle corresponding to a preset image data set, wherein the preset image data set is a training data set of the multi-layer perception network model of the neural radiance field;
acquiring a first camera position, a first focal length and a first image depth corresponding to the first camera view angle;
determining a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction according to the first focal length and the first image depth;
and determining that the first camera view angle is positioned in the preset directional bounding box under the condition that the target parts of the first camera position, the first boundary point and the second boundary point are positioned in the preset directional bounding box.
Optionally, the preset image dataset comprises a plurality of images; the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
acquiring a second camera position, a second focal length and a second image depth corresponding to each image in the preset image data set;
determining a third boundary point of a camera view angle corresponding to a target image in a horizontal direction and a fourth boundary point of the camera view angle in a vertical direction according to a target second focal length and a target second image depth, wherein the target image is any one image in the preset image data set, and the target second focal length and the target second image depth are respectively a second focal length and a second image depth corresponding to the target image;
and generating the preset directed bounding box according to the second camera position, the third boundary point and the fourth boundary point corresponding to each image.
Optionally, the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
constructing a first cone corresponding to the first camera view angle according to the first camera position, a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction;
And determining that the first camera view angle is positioned in a preset directional bounding box when the first cone is positioned in the preset directional bounding box entirely or when the preset cone part of the first cone is positioned in the preset directional bounding box.
Optionally, the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
acquiring a timestamp corresponding to the head joint angle of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is smaller than or equal to a preset time difference;
and generating the head skeleton animation of the virtual object based on a preset mixing space and the historical head joint angle of the virtual object under the condition that the time difference between the current time information and the time stamp is larger than a preset time difference.
Optionally, the processor 500 is further configured to read the program or the instruction in the memory 520, and perform the following steps:
acquiring a first input;
a head joint angle of a virtual object is determined from the first input, the head joint angle including one or more of a roll angle, a pitch angle, and a yaw angle of a head joint of the virtual object.
Optionally, the neural radiance field includes a multi-layer perceptron network model, the multi-layer perceptron network model is obtained by training on a preset image data set, each image in the preset image data set has a corresponding camera view angle, camera intrinsic parameters and an image depth, the input of the multi-layer perceptron network model is a camera view angle, and the output is a viewpoint image corresponding to the camera view angle.
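As context only, a heavily simplified PyTorch sketch of such a multi-layer perceptron is given below; it maps a sample position and view direction to a density and a colour, while positional encoding, hierarchical sampling and the volume-rendering integral that turns these outputs into a viewpoint image are omitted. The class name TinyNeRFMLP and the layer sizes are not taken from the application.

    import torch
    import torch.nn as nn

    class TinyNeRFMLP(nn.Module):
        # Maps a 3D sample position plus a 3D view direction to a density and an RGB
        # colour; rendering the viewpoint image then integrates these values along
        # the rays cast from the requested camera view angle.
        def __init__(self, hidden=128):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.density_head = nn.Linear(hidden, 1)
            self.color_head = nn.Linear(hidden, 3)

        def forward(self, xyz, view_dir):
            h = self.backbone(torch.cat([xyz, view_dir], dim=-1))
            density = torch.relu(self.density_head(h))
            color = torch.sigmoid(self.color_head(h))
            return density, color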
In the embodiment of the application, the electronic equipment can generate the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle of the virtual object, so that the generated head skeleton animation lies within the preset mixing space. This ensures the smoothness of the head skeleton animation and, in turn, the smoothness of the finally generated viewpoint image.
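The cube-shaped mixing space can be pictured with a short trilinear-weighting sketch; the normalisation of the roll/pitch/yaw angle into the unit cube and the function name cube_vertex_weights are assumptions used only to make the idea concrete.

    import numpy as np

    def cube_vertex_weights(angle, angle_min, angle_max):
        # Normalise the (roll, pitch, yaw) head joint angle into the unit cube that
        # spans the preset mixing space, then compute trilinear weights for its eight
        # vertices; each vertex holds a preset head joint angle and skeleton animation,
        # and blending them with these weights yields the target pose.
        t = (np.asarray(angle, dtype=float) - angle_min) / (np.asarray(angle_max, dtype=float) - angle_min)
        t = np.clip(t, 0.0, 1.0)
        weights = {}
        for vertex in np.ndindex(2, 2, 2):
            w = np.prod(np.where(np.array(vertex) == 1, t, 1.0 - t))
            weights[vertex] = float(w)
        return weights  # the eight weights sum to 1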
The embodiment of the present application further provides a readable storage medium, where a program or instructions are stored. When the program or instructions are executed by a processor, the processes of the method embodiment described in fig. 1 are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the method embodiment described in fig. 1 and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and may of course also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk), the product including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (12)

1. An image generation method, comprising:
acquiring a head joint angle of a virtual object, wherein a virtual camera is arranged at the head of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
determining a first camera view angle of the virtual camera according to the head skeleton animation;
generating a viewpoint image at the first camera view angle based on a neural radiance field.
2. The method of claim 1, wherein the preset mixing space is a cube space, and each vertex of the cube space corresponds to a preset head skeleton animation and a preset head joint angle;
the generating the head skeleton animation of the virtual object based on the preset mixing space and the head joint angle of the virtual object comprises the following steps:
performing linear interpolation processing on the head joint angle of the virtual object to obtain the weight of the head joint angle of the virtual object at each vertex of the cube space;
fusing the preset head joint angles corresponding to the vertices of the cube space according to the weights at the vertices to obtain a target pose corresponding to the head joint angle of the virtual object;
and determining the head skeleton animation corresponding to the target pose according to the preset head skeleton animation.
3. The method of claim 1, wherein the generating a viewpoint image at the first camera view angle based on the neural radiance field comprises:
and generating the viewpoint image at the first camera view angle based on the neural radiance field under the condition that the first camera view angle is located within a preset oriented bounding box.
4. The method of claim 3, wherein prior to generating the viewpoint image at the first camera view angle based on the neural radiance field, the method further comprises:
generating the preset oriented bounding box based on a second camera view angle corresponding to a preset image data set, wherein the preset image data set is a training data set of a multi-layer perceptron network model of the neural radiance field;
acquiring a first camera position, a first focal length and a first image depth corresponding to the first camera view angle;
determining a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction according to the first focal length and the first image depth;
and determining that the first camera view angle is located within the preset oriented bounding box under the condition that a target portion of the first camera position, the first boundary point and the second boundary point is located within the preset oriented bounding box.
5. The method of claim 4, wherein the preset image data set comprises a plurality of images; the generating the preset oriented bounding box based on the second camera view angle corresponding to the preset image data set comprises:
acquiring a second camera position, a second focal length and a second image depth corresponding to each image in the preset image data set;
determining a third boundary point of a camera view angle corresponding to a target image in a horizontal direction and a fourth boundary point of the camera view angle in a vertical direction according to a target second focal length and a target second image depth, wherein the target image is any one image in the preset image data set, and the target second focal length and the target second image depth are respectively a second focal length and a second image depth corresponding to the target image;
and generating the preset oriented bounding box according to the second camera position, the third boundary point and the fourth boundary point corresponding to each image.
6. The method of claim 4, wherein the determining that the first camera view angle is located within the preset oriented bounding box under the condition that a target portion of the first camera position, the first boundary point and the second boundary point is located within the preset oriented bounding box comprises:
constructing a first cone corresponding to the first camera view angle according to the first camera position, a first boundary point of the first camera view angle in the horizontal direction and a second boundary point of the first camera view angle in the vertical direction;
and determining that the first camera view angle is located within the preset oriented bounding box when the first cone is located entirely within the preset oriented bounding box or when a preset portion of the first cone is located within the preset oriented bounding box.
7. The method of any of claims 1-6, wherein the generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object comprises:
acquiring a timestamp corresponding to the head joint angle of the virtual object;
generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object under the condition that the time difference between the current time information and the timestamp is less than or equal to a preset time difference;
and generating the head skeleton animation of the virtual object based on a preset mixing space and the historical head joint angle of the virtual object under the condition that the time difference between the current time information and the timestamp is greater than the preset time difference.
8. The method of any one of claims 1-6, wherein the obtaining the head joint angle of the virtual object comprises:
acquiring a first input;
determining the head joint angle of the virtual object from the first input, the head joint angle including one or more of a roll angle, a pitch angle and a yaw angle of a head joint of the virtual object.
9. The method of any of claims 1-6, wherein the neural radiance field comprises a multi-layer perceptron network model trained based on a preset image data set, each image in the preset image data set comprising a corresponding camera view angle, camera intrinsic parameters and an image depth, an input of the multi-layer perceptron network model being a camera view angle, and an output being a viewpoint image corresponding to the camera view angle.
10. An image generating apparatus, comprising:
the first acquisition module is used for acquiring the head joint angle of the virtual object, and the head of the virtual object is provided with a virtual camera;
the first generation module is used for generating a head skeleton animation of the virtual object based on a preset mixing space and a head joint angle of the virtual object;
the first determining module is used for determining a first camera view angle of the virtual camera according to the head skeleton animation;
and the second generation module is used for generating a viewpoint image at the first camera view angle based on a neural radiance field.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image generation method of any of claims 1-9.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image generation method according to any of claims 1-9.
CN202310833229.2A 2023-07-07 2023-07-07 Image generation method, device, electronic equipment and readable storage medium Pending CN116883553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310833229.2A CN116883553A (en) 2023-07-07 2023-07-07 Image generation method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116883553A true CN116883553A (en) 2023-10-13

Family

ID=88254183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310833229.2A Pending CN116883553A (en) 2023-07-07 2023-07-07 Image generation method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116883553A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination