CN117974742A - Binocular image generation method, apparatus, device, storage medium, and program product
- Publication number
- CN117974742A (application CN202211281237.2A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06T7/55 — Image analysis; depth or shape recovery from multiple images
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; learning methods
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/20228 — Special algorithmic details; disparity calculation for image-based rendering
Abstract
The present disclosure relates to a binocular image generation method, apparatus, device, storage medium, and program product. The method includes: acquiring a plurality of images corresponding to a target scene, wherein the plurality of images are acquired from a plurality of angles of the target scene; training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model; generating a left-eye image based on a preset left-eye ray through the trained neural radiance field model; and generating a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
Description
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a method, apparatus, device, storage medium, and program product for generating a binocular image.
Background
By providing a viewer with a left-eye image and a right-eye image having appropriate parallax, a stereoscopic visual effect can be produced. Here, the left-eye image is the image provided for viewing by the left eye, and the right-eye image is the image provided for viewing by the right eye. The viewing device may be VR (Virtual Reality) glasses, AR (Augmented Reality) glasses, or the like.
In the related art, specially designed capture devices are required to obtain the left-eye image and the right-eye image; some of these devices even rely on a 3D (three-dimensional) sensor or require a precisely calibrated camera array, so the hardware cost is high and implementation is difficult.
Disclosure of Invention
The present disclosure provides a technical solution for generating binocular images.
According to an aspect of the present disclosure, there is provided a method for generating a binocular image, including:
acquiring a plurality of images corresponding to a target scene, wherein the images are acquired from a plurality of angles for the target scene;
training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model;
generating a left-eye image based on a preset left-eye ray through the trained neural radiance field model;
and generating a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
By acquiring a plurality of images captured from a plurality of angles of a target scene, training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model, generating a left-eye image based on a preset left-eye ray through the trained neural radiance field model, and generating a right-eye image based on a preset right-eye ray through the trained neural radiance field model, a binocular image corresponding to the target scene is generated from images captured from multiple angles. This reduces hardware cost, lowers the difficulty of obtaining binocular images, and enables the generation of accurate, high-quality binocular images.
In one possible implementation manner, the plurality of images include panoramic information of the target scene, the left-eye image is a panoramic image corresponding to a left eye, and the right-eye image is a panoramic image corresponding to a right eye.
In this implementation, by employing a plurality of images that include panoramic information of the target scene, a left-eye panoramic image and a right-eye panoramic image can be generated. A binocular panoramic image, that is, a panoramic image with a stereoscopic effect, can thus be provided, giving a stereoscopic panoramic experience to a viewer using a viewing device.
In one possible implementation, any image information in any one of the plurality of images is also contained in at least one other image of the plurality of images.
In this implementation, the same visual information of the target scene is captured repeatedly from different angles, which improves the accuracy of the three-dimensional reconstruction of the target scene and thus the quality of the finally generated left-eye and right-eye panoramic images.
In one possible implementation, the plurality of images are acquired by moving a single camera.
In this implementation, by acquiring a plurality of images corresponding to a target scene by using a single camera and generating a left-eye image and a right-eye image based on the plurality of images acquired thereby, the equipment cost for generating a binocular image can be reduced. In addition, in the mode of adopting a single camera, the problem of adjusting internal parameters of different cameras does not exist, and the operation complexity can be further reduced. In addition, in this implementation, the user only needs to take a plurality of images of the target scene from a plurality of angles, without having to operate a complicated apparatus.
In one possible implementation, the radius of the smallest circumscribed circle of the plurality of acquisition points corresponding to the plurality of images is within a preset radius interval, where the acquisition point corresponding to any one of the plurality of images represents the optical center position of the lens when that image is acquired, and the left boundary of the preset radius interval is greater than 0.
In this implementation, by controlling the radius of the smallest circumscribed circle of the acquisition points corresponding to the plurality of images to lie within the preset radius interval, images with adjacent viewing angles have appropriate parallax, which facilitates accurate and efficient three-dimensional reconstruction of the target scene.
In one possible implementation, training the neural radiance field model using the plurality of images to obtain a trained neural radiance field model includes:
determining, based on the plurality of images, a plurality of camera extrinsic parameters corresponding one-to-one to the plurality of images; and
training a neural radiance field model based on the plurality of images and the plurality of camera extrinsic parameters to obtain a trained neural radiance field model.
By training the neural radiance field model in this implementation, the neural radiance field model can learn the three-dimensional visual information of the target scene.
In one possible implementation,
the preset left-eye ray is a ray whose endpoint lies on a preset circle, the ray lies on a tangent line of the preset circle, and its direction is the clockwise direction of the preset circle;
the preset right-eye ray is a ray whose endpoint lies on the preset circle, the ray lies on a tangent line of the preset circle, and its direction is the counterclockwise direction of the preset circle;
and the diameter of the preset circle is a preset interpupillary distance.
With this implementation, a left-eye image and a right-eye image of high quality and with appropriate parallax can be generated.
In one possible implementation, the method is applied to any one of a virtual reality device, an augmented reality device, a mixed reality device, an artificial intelligence device, and a digital twin system.
According to an aspect of the present disclosure, there is provided a binocular image generating apparatus including:
The acquisition module is used for acquiring a plurality of images corresponding to a target scene, wherein the images are acquired from a plurality of angles for the target scene;
the training module is used for training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model;
the first generation module is used for generating a left-eye image based on a preset left-eye ray through the trained neural radiance field model;
and the second generation module is used for generating a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
In one possible implementation manner, the plurality of images include panoramic information of the target scene, the left-eye image is a panoramic image corresponding to a left eye, and the right-eye image is a panoramic image corresponding to a right eye.
In one possible implementation, any image information in any one of the plurality of images is also contained in at least one other image of the plurality of images.
In one possible implementation, the plurality of images are acquired by moving a single camera.
In one possible implementation, the radius of the smallest circumscribed circle of the plurality of acquisition points corresponding to the plurality of images is within a preset radius interval, where the acquisition point corresponding to any one of the plurality of images represents the optical center position of the lens when that image is acquired, and the left boundary of the preset radius interval is greater than 0.
In one possible implementation, the training module is configured to:
determining, based on the plurality of images, a plurality of camera extrinsic parameters corresponding one-to-one to the plurality of images; and
training a neural radiance field model based on the plurality of images and the plurality of camera extrinsic parameters to obtain a trained neural radiance field model.
In one possible implementation,
the preset left-eye ray is a ray whose endpoint lies on a preset circle, the ray lies on a tangent line of the preset circle, and its direction is the clockwise direction of the preset circle;
the preset right-eye ray is a ray whose endpoint lies on the preset circle, the ray lies on a tangent line of the preset circle, and its direction is the counterclockwise direction of the preset circle;
and the diameter of the preset circle is a preset interpupillary distance.
In one possible implementation, the apparatus is applied to any one of a virtual reality device, an augmented reality device, a mixed reality device, an artificial intelligence device, and a digital twin system.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run in an electronic device, causes a processor in the electronic device to perform the above method.
In the embodiments of the present disclosure, a plurality of images captured from a plurality of angles of a target scene are acquired, a neural radiance field model is trained using the plurality of images to obtain a trained neural radiance field model, a left-eye image is generated based on a preset left-eye ray through the trained neural radiance field model, and a right-eye image is generated based on a preset right-eye ray through the trained neural radiance field model. A binocular image corresponding to the target scene is thereby generated from images captured from multiple angles, which reduces hardware cost, lowers the difficulty of obtaining binocular images, and enables the generation of accurate, high-quality binocular images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a method for generating a binocular image provided by an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a collection manner of a plurality of images corresponding to a target scene in the binocular image generation method according to the embodiment of the present disclosure.
Fig. 3a shows a schematic diagram of a left-eye panoramic image generated by the binocular image generation method provided by the embodiments of the present disclosure.
Fig. 3b shows a schematic diagram of a right-eye panoramic image generated by the binocular image generation method provided by the embodiments of the present disclosure.
Fig. 4a is a schematic diagram of a preset left eye ray in a binocular image generating method according to an embodiment of the present disclosure.
Fig. 4b shows a schematic diagram of a preset right eye ray in the method for generating a binocular image according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a binocular image generating apparatus provided by an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The embodiments of the present disclosure provide a binocular image generation method and apparatus, an electronic device, a storage medium, and a program product. A plurality of images captured from a plurality of angles of a target scene are acquired; a neural radiance field model is trained using the plurality of images to obtain a trained neural radiance field model; a left-eye image is generated based on a preset left-eye ray through the trained neural radiance field model; and a right-eye image is generated based on a preset right-eye ray through the trained neural radiance field model. A binocular image corresponding to the target scene is thereby generated from images captured from multiple angles, which reduces hardware cost, lowers the difficulty of obtaining binocular images, and enables the generation of accurate, high-quality binocular images.
Illustratively, the binocular image generation method and apparatus, electronic device, storage medium, and program product according to the embodiments of the present disclosure may be applied to fields such as virtual reality (Virtual Reality, VR), augmented reality (Augmented Reality, AR), mixed reality (Mixed Reality, MR), artificial intelligence (e.g., AI painting), and digital twinning. For example, they can be applied to any one of a virtual reality device, an artificial intelligence device, an augmented reality device, a mixed reality device, and a digital twin system. It should be noted that the present disclosure does not limit the specific application field or scenario.
The following describes in detail a binocular image generation method provided by an embodiment of the present disclosure with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for generating a binocular image provided by an embodiment of the present disclosure. In one possible implementation, the execution subject of the binocular image generation method may be a binocular image generation apparatus; for example, the method may be executed by a terminal device, a server, or another electronic device. The terminal device may be user equipment (User Equipment, UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the method may be implemented by a processor invoking computer readable instructions stored in a memory. As shown in fig. 1, the binocular image generation method includes steps S11 to S14.
In step S11, a plurality of images corresponding to a target scene are acquired, where the plurality of images are acquired from a plurality of angles for the target scene.
In step S12, a neural radiance field model is trained using the plurality of images to obtain a trained neural radiance field model.
In step S13, a left-eye image is generated based on a preset left-eye ray through the trained neural radiance field model.
In step S14, a right-eye image is generated based on a preset right-eye ray through the trained neural radiance field model.
In the embodiments of the present disclosure, the binocular image may include a left-eye image and a right-eye image, where the left-eye image represents an image provided for left-eye viewing and the right-eye image represents an image provided for right-eye viewing. There is parallax between the left-eye image and the right-eye image, and both may be two-dimensional images. Parallax represents the difference in apparent direction that results from viewing the same object from two points separated by a certain distance; the angle subtended at the target by the two viewing points may be referred to as the parallax angle. In some application scenarios, the binocular image may also be referred to as a 3D (3 Dimensions) image, a stereoscopic image, a binocular visual image, and the like, which is not limited here.
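As a brief illustration (standard stereo geometry, not taken verbatim from this disclosure): for two viewpoints separated by a baseline b observing a target at distance Z, the parallax angle γ satisfies

    tan(γ/2) = (b/2) / Z,  so  γ ≈ b / Z  when Z ≫ b,

which is why nearer objects exhibit larger parallax between the left-eye image and the right-eye image, while parallax vanishes for very distant objects.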
The left-eye image and the right-eye image generated by the embodiments of the present disclosure can be provided to a viewing device for display, so as to provide a stereoscopic visual effect to a viewer using the viewing device. The viewing device may be any device capable of providing a stereoscopic effect by displaying the left-eye image and the right-eye image, for example VR (Virtual Reality) glasses, AR (Augmented Reality) glasses, or the like, which is not limited here.
In the embodiments of the present disclosure, the target scene may represent any scene for which a binocular image is to be generated. The plurality of images corresponding to the target scene may be images obtained by performing image acquisition and/or video acquisition on the target scene from a plurality of angles. For example, images of a target scene may be taken from multiple angles, resulting in multiple images. As another example, a video of a target scene may be taken from multiple angles and multiple images taken from the video.
In the embodiments of the present disclosure, when the plurality of images are captured from a plurality of angles, the camera movement may be controlled manually by a user or mechanically, which is not limited here. Manually moving the camera requires no complex equipment and reduces hardware cost, so the difficulty of multi-angle image acquisition is lowered and the method can be applied in a wider range of scenarios.
In one possible implementation, the plurality of images are acquired by moving a single camera. In this implementation, by acquiring a plurality of images corresponding to a target scene by using a single camera and generating a left-eye image and a right-eye image based on the plurality of images acquired thereby, the equipment cost for generating a binocular image can be reduced. In addition, in the mode of adopting a single camera, the problem of adjusting internal parameters of different cameras does not exist, and the operation complexity can be further reduced. In addition, in this implementation, the user only needs to take a plurality of images of the target scene from a plurality of angles, without having to operate a complicated apparatus.
In another possible implementation, the plurality of images may be acquired by moving at least two cameras. In this implementation, the internal parameters of the at least two cameras may be adjusted so that the internal parameters of the at least two cameras are identical or similar. After adjusting the internal parameters of the at least two cameras, the at least two cameras may be moved to capture multiple images of the target scene from multiple angles.
In one possible implementation, the radius of the smallest circumscribed circle of the plurality of acquisition points corresponding to the plurality of images is within a preset radius interval, where the acquisition point corresponding to any one of the plurality of images represents the optical center position of the lens when that image is acquired, and the left boundary of the preset radius interval is greater than 0.
There is a specific point on the principal axis of a lens, called the optical center: light passing through this point does not change its propagation direction. The center of a convex lens can be regarded approximately as its optical center, and in general a camera lens is equivalent to a convex lens.
In this implementation, the plurality of acquisition points may be projected onto the same horizontal plane to obtain a plurality of projection points in one-to-one correspondence with the acquisition points, and the smallest circumscribed circle of these projection points may be used as the smallest circumscribed circle of the plurality of acquisition points.
For example, the preset radius interval may be (50 cm, 100 cm).
In this implementation, the range of camera movement may be determined according to a preset viewpoint and a preset radius interval, and the camera movement may be controlled within the range.
In the implementation manner, the radius of the minimum circumcircle of the plurality of acquisition points corresponding to the plurality of images is controlled to be within the preset radius interval, so that the images with adjacent visual angles in the plurality of images have proper parallax, and the accurate and efficient three-dimensional reconstruction of the target scene is facilitated.
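A minimal sketch of this check follows (an illustration under stated assumptions, not code from this disclosure): the acquisition points are projected onto a horizontal plane and the radius of their minimum enclosing circle is compared against the preset radius interval. The array `acquisition_points`, the function name, and the assumption that the vertical axis is z are hypothetical; the default bounds of 0.5 m and 1.0 m follow the example interval above, and OpenCV's minEnclosingCircle computes the smallest circumscribed circle.

```python
import numpy as np
import cv2

def circumcircle_radius_ok(acquisition_points: np.ndarray,
                           r_min: float = 0.5, r_max: float = 1.0) -> bool:
    """Check that the optical centers (N, 3), in meters, satisfy the interval."""
    # Project the 3D acquisition points onto a common horizontal plane
    # by dropping the vertical coordinate (assumed here to be z).
    projected = acquisition_points[:, :2].astype(np.float32)
    # Minimum enclosing circle of the projected points.
    (_, _), radius = cv2.minEnclosingCircle(projected)
    # The left boundary of the preset radius interval must be greater than 0,
    # which r_min = 0.5 m satisfies.
    return r_min < radius < r_max
```

For example, if the camera positions recovered by structure-from-motion span roughly a 0.7 m circle around the viewpoint, the check passes; a handheld sweep that stays within a few centimeters would fail it, signaling insufficient parallax between adjacent views.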
In one possible implementation manner, the plurality of images include panoramic information of the target scene, the left-eye image is a panoramic image corresponding to a left eye, and the right-eye image is a panoramic image corresponding to a right eye. In this implementation, the plurality of images may contain 360 degrees of visual information of the target scene. In this implementation, the binocular image may be referred to as a binocular panoramic image, the left eye image may be referred to as a left eye panoramic image, and the right eye image may be referred to as a right eye panoramic image. In some application scenarios, the binocular panoramic image may also be referred to as a 3D panoramic image, a stereoscopic panoramic image, a panoramic stereoscopic image, a binocular stereoscopic panoramic image, etc., without limitation herein.
In this implementation, by employing a plurality of images that include panoramic information of the target scene, a left-eye panoramic image and a right-eye panoramic image can be generated. A binocular panoramic image, that is, a panoramic image with a stereoscopic effect, can thus be provided, giving a stereoscopic panoramic experience to a viewer using a viewing device.
As an example of this implementation, any image information in any one of the plurality of images is also contained in at least one other image of the plurality of images. In this example, the same visual information of the target scene may be captured repeatedly from different angles, so that different captured images contain the same visual information of the target scene. For example, each object in the target scene may be captured at least twice while the camera is moved.
In this example, by repeatedly acquiring the same visual information of the target scene from different angles, the accuracy of three-dimensional reconstruction of the target scene can be improved, and the quality of the finally generated left-eye panoramic image and right-eye panoramic image can be improved.
As another example of this implementation, image information in one of the plurality of images may not be contained in any other of the plurality of images.
Fig. 2 is a schematic diagram illustrating how a plurality of images corresponding to a target scene may be captured in the binocular image generation method according to an embodiment of the present disclosure. As shown in fig. 2, the camera 22 may be moved in the vicinity of the viewpoint 21 and controlled to photograph the target scene facing outward. The radius of the smallest circumscribed circle 23 of the acquisition points of the camera 22 may be between 50 cm and 1 m. By controlling the camera 22 to capture images at a plurality of different positions, the captured images can cover image information of all angles outside the smallest circumscribed circle 23. Capturing more images of the target scene from more angles facilitates more accurate three-dimensional reconstruction and thus the generation of higher-resolution left-eye and right-eye images.
Fig. 3a shows a schematic diagram of a left-eye panoramic image generated by the binocular image generation method provided by the embodiments of the present disclosure. Fig. 3b shows a schematic diagram of a right-eye panoramic image generated by the binocular image generation method provided by the embodiments of the present disclosure. In fig. 3a and 3b, the left-eye panoramic image and the right-eye panoramic image are spherically projected panoramic images.
In the embodiments of the present disclosure, a neural radiance field (Neural Radiance Fields, NeRF) model may be trained using the plurality of images. During training, the parameters of the neural radiance field model may be updated by minimizing the pixel differences between the known images (i.e., the plurality of images) and the rendered images. The neural radiance field model introduces a fully connected neural network into the three-dimensional representation of a scene. With the plurality of images as supervision, the neural radiance field model can implicitly reconstruct the target scene in three dimensions, after which a two-dimensional image from a new angle can be generated by rendering at that angle. The neural radiance field model in the embodiments of the present disclosure may use the neural radiance field algorithm or a variant thereof, which is not limited here.
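The following is a heavily simplified sketch of this training objective (an illustration under stated assumptions, not the implementation of this disclosure): a toy PyTorch MLP stands in for the radiance field, positional encoding and hierarchical sampling are omitted, and all names are hypothetical. It shows only the loop structure described above: query the field along rays, composite colors by volume rendering, and minimize the pixel difference to the known images.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, depth=4, width=128):
        super().__init__()
        layers, in_dim = [], 3 + 3  # 3D position + 3D view direction (no encoding)
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Linear(width, 4)  # RGB + volume density

    def forward(self, x, d):
        out = self.head(self.backbone(torch.cat([x, d], dim=-1)))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_rays(model, origins, dirs, near=0.1, far=6.0, n_samples=64):
    # Sample points along each ray between the near and far bounds.
    t = torch.linspace(near, far, n_samples, device=origins.device)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]
    rgb, sigma = model(pts, dirs[:, None, :].expand_as(pts))
    # Standard volume-rendering quadrature: alpha-composite along the ray.
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)

def training_step(model, optimizer, ray_origins, ray_dirs, target_rgb):
    # Minimize the pixel difference between rendered and known colors.
    pred = render_rays(model, ray_origins, ray_dirs)
    loss = ((pred - target_rgb) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage: model = TinyNeRF()
#                     opt = torch.optim.Adam(model.parameters(), lr=5e-4)
```

In practice, positional encoding of the inputs and coarse-to-fine hierarchical sampling are typically added to reach the rendering quality of full NeRF implementations.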
In one possible implementation, training the neural radiance field model using the plurality of images to obtain a trained neural radiance field model includes: determining, based on the plurality of images, a plurality of camera extrinsic parameters corresponding one-to-one to the plurality of images; and training a neural radiance field model based on the plurality of images and the plurality of camera extrinsic parameters to obtain a trained neural radiance field model.
In this implementation, a structure-from-motion (Structure from Motion, SfM) method or the like may be used to recover, from the plurality of images, the plurality of camera extrinsic parameters corresponding one-to-one to the images. The camera extrinsics corresponding to any of the images may include rotation information and translation information. As an example of this implementation, the camera extrinsics corresponding to an image may be represented by an extrinsic matrix, the rotation information by a rotation matrix, and the translation information by a translation vector.
For any one of the plurality of images, the three-dimensional coordinates x = (x, y, z) and the two-dimensional viewing directions d = (θ, φ) of the spatial points corresponding to that image may be determined from its camera extrinsics. In the training phase, the input of the neural radiance field model may include the three-dimensional coordinates and two-dimensional viewing directions of the spatial points corresponding to the image. By training the neural radiance field model in this way, it can learn the three-dimensional visual information of the target scene.
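As a hedged sketch of how such inputs can be formed from the recovered extrinsics (assuming a pinhole camera model and a world-to-camera convention x_cam = R·x_world + t; `K`, `R`, `t`, and the function names are illustrative, not from this disclosure):

```python
import numpy as np

def pixel_rays(K: np.ndarray, R: np.ndarray, t: np.ndarray, h: int, w: int):
    """Per-pixel ray origin and world-space directions for one image."""
    # Camera center in world coordinates: c = -R^T t.
    origin = -R.T @ t
    i, j = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project pixel coordinates through the intrinsics into camera rays.
    cam_dirs = np.stack([(i - K[0, 2]) / K[0, 0],
                         (j - K[1, 2]) / K[1, 1],
                         np.ones_like(i, dtype=np.float64)], axis=-1)
    # Rotate camera-frame directions into the world frame (d_world = R^T d_cam).
    world_dirs = cam_dirs @ R
    world_dirs /= np.linalg.norm(world_dirs, axis=-1, keepdims=True)
    return origin, world_dirs

def view_direction_angles(world_dirs: np.ndarray):
    """The two-dimensional viewing direction d = (theta, phi) described above."""
    x, y, z = world_dirs[..., 0], world_dirs[..., 1], world_dirs[..., 2]
    theta = np.arctan2(y, x)             # azimuth
    phi = np.arcsin(np.clip(z, -1, 1))   # elevation
    return theta, phi
```

Points along each ray, sampled as origin + s·direction for increasing s, then supply the three-dimensional coordinates x = (x, y, z) paired with the angles (θ, φ) as the model inputs.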
In the embodiments of the present disclosure, the left-eye image and the right-eye image are rendered through the trained neural radiance field model. The execution order of steps S13 and S14 is not limited: for example, they may be performed simultaneously, step S13 may be performed before step S14, or step S14 may be performed before step S13.
In one possible implementation, the preset left-eye ray is a ray whose endpoint lies on a preset circle; the ray lies on a tangent line of the preset circle, and its direction is the clockwise direction of the preset circle. The preset right-eye ray is a ray whose endpoint lies on the preset circle; the ray lies on a tangent line of the preset circle, and its direction is the counterclockwise direction of the preset circle. The diameter of the preset circle is a preset interpupillary distance.
In this implementation, the preset interpupillary distance may range from 5 cm to 12 cm. As an example of this implementation, the interpupillary distance of the target user may be used as the preset interpupillary distance.
In this implementation, the diameter of the preset circle equals the preset interpupillary distance, and the preset left-eye rays and preset right-eye rays may be sampled according to the ray sampling scheme of ODS (Omni-Directional Stereo, panoramic stereo). Fig. 4a is a schematic diagram of preset left-eye rays in the binocular image generation method according to an embodiment of the present disclosure; the arrowed rays in fig. 4a are the preset left-eye rays. As shown in fig. 4a, each preset left-eye ray is a ray whose endpoint lies on the preset circle, lying on a tangent line of the circle and pointing in its clockwise direction. Fig. 4b shows a schematic diagram of preset right-eye rays; the arrowed rays in fig. 4b are the preset right-eye rays. As shown in fig. 4b, each preset right-eye ray is a ray whose endpoint lies on the preset circle, lying on a tangent line of the circle and pointing in its counterclockwise direction.
For example, suppose a spherically projected panoramic image contains n columns of pixels, each column covering 360/n degrees. After the preset left-eye rays are determined, the n columns can be rendered column by column to obtain the left-eye panoramic image; after the preset right-eye rays are determined, the n columns can be rendered column by column to obtain the right-eye panoramic image.
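A minimal sketch of this tangent-ray layout (an illustration only; the function name, the default interpupillary distance of 0.065 m, and the coordinate conventions are assumptions, not taken from this disclosure):

```python
import numpy as np

def ods_rays(n_cols: int, ipd: float = 0.065, eye: str = "left"):
    """Return (origins, directions) for one panorama row of n_cols columns.

    ipd is the preset interpupillary distance in meters (the description
    above allows roughly 0.05 m to 0.12 m); the circle radius is ipd / 2.
    """
    r = ipd / 2.0
    phi = np.linspace(0.0, 2.0 * np.pi, n_cols, endpoint=False)
    # Ray endpoints (origins) on the preset circle, in the horizontal plane.
    origins = np.stack([r * np.cos(phi), r * np.sin(phi),
                        np.zeros_like(phi)], axis=-1)
    # Tangent direction at angle phi: clockwise for the left eye,
    # counterclockwise for the right eye.
    sign = -1.0 if eye == "left" else 1.0
    directions = np.stack([sign * -np.sin(phi), sign * np.cos(phi),
                           np.zeros_like(phi)], axis=-1)
    return origins, directions
```

Each returned direction corresponds to one panorama column at a given azimuth; per-row elevation can be added by tilting the direction vector out of the horizontal plane before querying the trained model for that pixel.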
With this implementation, a left-eye image and a right-eye image of high quality and with appropriate parallax can be generated.
Of course, in other implementations, those skilled in the art may flexibly set the directions of the preset left-eye rays and preset right-eye rays according to the requirements of the actual application scenario and/or personal preference, which is not limited here.
The binocular image generation method provided by the embodiment of the disclosure can be applied to the fields of virtual reality, augmented reality, three-dimensional reconstruction, computer vision, deep learning and the like, and is not limited herein.
The binocular image generation method provided by the embodiments of the present disclosure is described below through a specific application scenario. In this scenario, the capture approach shown in fig. 2 may be used: a single camera is moved to capture the target scene from different angles, yielding a plurality of images. The plurality of images may include 360-degree image information of the target scene, and any image information in any one of the images may also be contained in at least one other image. After the plurality of images are acquired, a plurality of camera extrinsic parameters corresponding one-to-one to the images may be determined based on the images, and a neural radiance field model may be trained based on the images and the extrinsics. After training is completed, a left-eye panoramic image may be generated based on the preset left-eye rays shown in fig. 4a, and a right-eye panoramic image based on the preset right-eye rays shown in fig. 4b, through the trained neural radiance field model.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, the combinations are not described in this disclosure one by one. It will also be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides a binocular image generation apparatus, an electronic device, a computer readable storage medium, and a computer program product, each of which may be used to implement any binocular image generation method provided by the present disclosure; for the corresponding technical solutions and effects, reference may be made to the corresponding descriptions in the method section, which are not repeated here.
Fig. 5 shows a block diagram of a binocular image generating apparatus provided by an embodiment of the present disclosure. As shown in fig. 5, the binocular image generating apparatus includes:
an obtaining module 51, configured to obtain a plurality of images corresponding to a target scene, where the plurality of images are acquired from a plurality of angles for the target scene;
a training module 52, configured to train a neural radiance field model using the plurality of images to obtain a trained neural radiance field model;
a first generation module 53, configured to generate a left-eye image based on a preset left-eye ray through the trained neural radiance field model;
and a second generation module 54, configured to generate a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
In one possible implementation manner, the plurality of images include panoramic information of the target scene, the left-eye image is a panoramic image corresponding to a left eye, and the right-eye image is a panoramic image corresponding to a right eye.
In one possible implementation, any image information in any one of the plurality of images is also contained in at least one other image of the plurality of images.
In one possible implementation, the plurality of images are acquired by moving a single camera.
In one possible implementation, the radius of the smallest circumscribed circle of the plurality of acquisition points corresponding to the plurality of images is within a preset radius interval, where the acquisition point corresponding to any one of the plurality of images represents the optical center position of the lens when that image is acquired, and the left boundary of the preset radius interval is greater than 0.
In one possible implementation, the training module 52 is configured to:
determining, based on the plurality of images, a plurality of camera extrinsic parameters corresponding one-to-one to the plurality of images; and
training a neural radiance field model based on the plurality of images and the plurality of camera extrinsic parameters to obtain a trained neural radiance field model.
In one possible implementation,
the preset left-eye ray is a ray whose endpoint lies on a preset circle, the ray lies on a tangent line of the preset circle, and its direction is the clockwise direction of the preset circle;
the preset right-eye ray is a ray whose endpoint lies on the preset circle, the ray lies on a tangent line of the preset circle, and its direction is the counterclockwise direction of the preset circle;
and the diameter of the preset circle is a preset interpupillary distance.
In one possible implementation, the apparatus is applied to any one of a virtual reality device, an augmented reality device, a mixed reality device, an artificial intelligence device, and a digital twin system.
In the embodiments of the present disclosure, a plurality of images captured from a plurality of angles of a target scene are acquired, a neural radiance field model is trained using the plurality of images to obtain a trained neural radiance field model, a left-eye image is generated based on a preset left-eye ray through the trained neural radiance field model, and a right-eye image is generated based on a preset right-eye ray through the trained neural radiance field model. A binocular image corresponding to the target scene is thereby generated from images captured from multiple angles, which reduces hardware cost, lowers the difficulty of obtaining binocular images, and enables the generation of accurate, high-quality binocular images.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation and technical effects of the functions or modules may refer to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. Wherein the computer readable storage medium may be a non-volatile computer readable storage medium or may be a volatile computer readable storage medium.
The disclosed embodiments also propose a computer program comprising computer readable code which, when run in an electronic device, causes a processor in the electronic device to carry out the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the above method.
The embodiment of the disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 6 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 6, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Microsoft Windows Server (TM), Apple's Mac OS X (TM), the multi-user multi-process operating system Unix (TM), the free and open-source Unix-like Linux (TM) or FreeBSD (TM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGA), or programmable logic arrays (PLA), with state information of the computer readable program instructions; the electronic circuitry can then execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be implemented in hardware, software, or a combination of the two. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
The foregoing description of each embodiment emphasizes what distinguishes that embodiment from the others; for parts that are the same as or similar across embodiments, the embodiments may be referred to one another, and those parts are not repeated here for brevity.
Where the technical solutions of the embodiments of the present disclosure involve personal information, a product applying those solutions clearly informs users of the personal-information processing rules and obtains each individual's separate consent before processing personal information. Where sensitive personal information is involved, the product additionally obtains the individual's separate consent before processing and satisfies the requirement of "explicit consent". For example, a clear and conspicuous notice is posted at a personal-information collection device, such as a camera, announcing that the device's collection range is being entered and that personal information will be collected; a person who voluntarily enters that range is deemed to consent to the collection. Alternatively, on a device that processes personal information, where a conspicuous notice states the personal-information processing rules, authorization may be obtained through a pop-up message or by asking the person to upload personal information. The personal-information processing rules may include the identity of the personal-information processor, the purpose of processing, the processing method, and the types of personal information processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the disclosure to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the embodiments described. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or their improvement over technologies available in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A method of generating a binocular image, comprising:
acquiring a plurality of images corresponding to a target scene, wherein the plurality of images are acquired of the target scene from a plurality of angles;
training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model;
generating a left-eye image based on a preset left-eye ray through the trained neural radiance field model; and
generating a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
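For illustration only (not part of the claims): claim 1 corresponds to the standard neural-radiance-field pipeline specialised to stereo output, i.e. fit a radiance field to multi-view photographs of the scene, then query it once per eye along preset rays. A minimal PyTorch sketch follows; the network width, the fixed near/far bounds, the uniform sampling, and the omission of positional encoding are simplifying assumptions rather than anything the claims specify.

```python
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Tiny NeRF-style MLP mapping (x, y, z, dx, dy, dz) -> (r, g, b, sigma)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def render(self, origins: torch.Tensor, directions: torch.Tensor,
               n_samples: int = 64, near: float = 0.05, far: float = 4.0):
        # Sample points uniformly along each ray, query the field, and
        # alpha-composite the colours front to back (volume rendering).
        t = torch.linspace(near, far, n_samples, device=origins.device)
        pts = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
        dirs = directions[:, None, :].expand_as(pts)
        raw = self.net(torch.cat([pts, dirs], dim=-1))
        rgb, sigma = torch.sigmoid(raw[..., :3]), torch.relu(raw[..., 3])
        alpha = 1.0 - torch.exp(-sigma * (t[1] - t[0]))
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
            dim=1)[:, :-1]
        weights = alpha * trans                       # per-sample contribution
        return (weights[..., None] * rgb).sum(dim=1)  # (N, 3) pixel colours

def train_step(model: RadianceField, opt: torch.optim.Optimizer,
               ray_o: torch.Tensor, ray_d: torch.Tensor, target_rgb: torch.Tensor):
    # One optimisation step against ground-truth pixels from the captured views.
    opt.zero_grad()
    loss = ((model.render(ray_o, ray_d) - target_rgb) ** 2).mean()
    loss.backward()
    opt.step()
    return float(loss)
```

At inference time the same `render` call is driven by the preset left-eye and right-eye rays rather than training rays; see the sketches after claims 6 and 7 for where those rays come from.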
2. The method of claim 1, wherein the plurality of images contains panoramic information of the target scene, the left-eye image is a panoramic image corresponding to a left eye, and the right-eye image is a panoramic image corresponding to a right eye.
3. The method of claim 2, wherein any image information contained in any one of the plurality of images is also contained in at least one other image of the plurality of images.
4. The method of claim 1, wherein the plurality of images are acquired by moving a single camera.
5. The method of claim 1, wherein a radius of a smallest circumscribed circle of a plurality of acquisition points corresponding to the plurality of images lies within a preset radius interval, wherein the acquisition point corresponding to any one of the plurality of images represents the position of the optical center of the lens at the time that image was acquired, and wherein the left boundary of the preset radius interval is greater than 0.
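For illustration only: claim 5 constrains the capture geometry, i.e. the smallest circle enclosing the lens optical-centre positions must have a radius inside a preset interval whose left boundary exceeds zero, so that the captured views carry a genuine baseline. A hedged check is sketched below, assuming the capture positions are expressed as 2-D points in a horizontal plane; the brute force over two- and three-point support sets is exact and adequate for the handful of positions a single moving camera (claim 4) produces, and the interval handling (open left, closed right) is an assumption.

```python
from itertools import combinations

import numpy as np

def min_enclosing_radius(points: np.ndarray) -> float:
    """Exact radius of the smallest circle enclosing 2-D points (brute force).

    The minimal enclosing circle is supported by two or three of the points,
    so trying every pair-diameter circle and every triple circumcircle and
    keeping the smallest one that covers all points is exact.
    """
    if len(points) < 2:
        return 0.0
    def covers(centre: np.ndarray, r: float) -> bool:
        return bool(np.all(np.linalg.norm(points - centre, axis=1) <= r + 1e-9))
    best = np.inf
    for a, b in combinations(points, 2):       # circle with segment ab as diameter
        centre, r = (a + b) / 2.0, np.linalg.norm(a - b) / 2.0
        if covers(centre, r):
            best = min(best, r)
    for a, b, c in combinations(points, 3):    # circumcircle of the triple
        d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            continue                           # collinear triple: no circumcircle
        ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
        uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
        centre = np.array([ux, uy])
        r = float(np.linalg.norm(a - centre))
        if covers(centre, r):
            best = min(best, r)
    return float(best)

def satisfies_claim_5(points: np.ndarray, lo: float, hi: float) -> bool:
    # The left boundary of the preset interval must exceed 0 per the claim.
    return 0.0 < lo < min_enclosing_radius(points) <= hi
```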
6. The method of any one of claims 1 to 5, wherein training a neural radiance field model using the plurality of images to obtain a trained neural radiance field model comprises:
determining, based on the plurality of images, a plurality of camera extrinsic parameters in one-to-one correspondence with the plurality of images; and
training the neural radiance field model based on the plurality of images and the plurality of camera extrinsic parameters to obtain the trained neural radiance field model.
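For illustration only: claim 6 splits training into pose recovery followed by field fitting. In practice the extrinsics are usually recovered by structure-from-motion (COLMAP is a common choice); in the sketch below, `estimate_extrinsics_sfm` is a hypothetical wrapper for such a tool, not a real API, and the pinhole intrinsics matrix `K` is likewise an assumed input. The (origin, direction) pairs produced here, together with the corresponding ground-truth pixels, are what `train_step` in the claim-1 sketch consumes.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Extrinsics:
    rotation: np.ndarray  # (3, 3) world-to-camera rotation
    center: np.ndarray    # (3,) optical centre of the lens in world coordinates

def estimate_extrinsics_sfm(images: list) -> list:
    """Hypothetical placeholder: feature matching plus bundle adjustment,
    normally delegated to a structure-from-motion tool such as COLMAP."""
    raise NotImplementedError

def rays_for_image(extr: Extrinsics, K: np.ndarray, h: int, w: int):
    # Back-project every pixel through the optical centre into world space,
    # assuming a pinhole camera with intrinsics K.
    i, j = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, shape (h, w)
    cam_dirs = np.stack([(i - K[0, 2]) / K[0, 0],
                         (j - K[1, 2]) / K[1, 1],
                         np.ones((h, w))], axis=-1)
    world_dirs = cam_dirs @ extr.rotation  # d @ R == R^T d: camera-to-world
    world_dirs /= np.linalg.norm(world_dirs, axis=-1, keepdims=True)
    origins = np.broadcast_to(extr.center, world_dirs.shape)
    return origins.reshape(-1, 3), world_dirs.reshape(-1, 3)
```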
7. The method of any one of claims 1 to 5, wherein
the preset left-eye ray is a ray whose endpoint lies on a preset circle, the preset left-eye ray lies on a tangent line of the preset circle, and the preset left-eye ray points in the clockwise direction of the preset circle;
the preset right-eye ray is a ray whose endpoint lies on the preset circle, the preset right-eye ray lies on a tangent line of the preset circle, and the preset right-eye ray points in the counterclockwise direction of the preset circle; and
a diameter of the preset circle is a preset interpupillary distance.
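For illustration only: claim 7 describes the classic omnidirectional-stereo ray layout. Every ray starts on a viewing circle whose diameter equals the preset interpupillary distance and travels along the tangent at its start point, clockwise for the left eye and counterclockwise for the right. A small sketch follows, assuming the circle lies in the horizontal plane and using a 65 mm interpupillary distance as an illustrative default, not a value fixed by the claims.

```python
import numpy as np

def stereo_panorama_rays(n_columns: int, ipd: float = 0.065):
    """One tangent ray per panorama column, for each eye (claim-7 layout)."""
    radius = ipd / 2.0  # circle diameter equals the preset interpupillary distance
    theta = np.linspace(0.0, 2.0 * np.pi, n_columns, endpoint=False)
    origins = radius * np.stack(
        [np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=-1)
    ccw_tangent = np.stack(
        [-np.sin(theta), np.cos(theta), np.zeros_like(theta)], axis=-1)
    left_dirs = -ccw_tangent   # clockwise direction of the circle
    right_dirs = ccw_tangent   # counterclockwise direction of the circle
    return origins, left_dirs, right_dirs
```

Rendering one panorama column per tangent ray through the trained model of claim 1 yields the left-eye and right-eye panoramic images of claim 2.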
8. The method according to any one of claims 1 to 5, wherein the method is applied to any one of a virtual reality device, an augmented reality device, a mixed reality device, an artificial intelligence device, and a digital twin system.
9. A binocular image generating apparatus, comprising:
an acquisition module, configured to acquire a plurality of images corresponding to a target scene, wherein the plurality of images are acquired of the target scene from a plurality of angles;
a training module, configured to train a neural radiance field model using the plurality of images to obtain a trained neural radiance field model;
a first generation module, configured to generate a left-eye image based on a preset left-eye ray through the trained neural radiance field model; and
a second generation module, configured to generate a right-eye image based on a preset right-eye ray through the trained neural radiance field model.
10. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 8.
12. A computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, wherein when the computer readable code runs in an electronic device, a processor in the electronic device performs the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211281237.2A CN117974742B (en) | 2022-10-19 | 2022-10-19 | Binocular image generation method, binocular image generation device, binocular image generation apparatus, binocular image generation storage medium, and binocular image generation program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117974742A (en) | 2024-05-03
CN117974742B (en) | 2024-10-18
Family
ID=90863282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211281237.2A Active CN117974742B (en) | 2022-10-19 | 2022-10-19 | Binocular image generation method, binocular image generation device, binocular image generation apparatus, binocular image generation storage medium, and binocular image generation program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117974742B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130300737A1 (en) * | 2011-02-08 | 2013-11-14 | Fujifilm Corporation | Stereoscopic image generating apparatus, stereoscopic image generating method, and stereoscopic image generating program |
CN107358626A (en) * | 2017-07-17 | 2017-11-17 | 清华大学深圳研究生院 | Method for computing disparity using a conditional generative adversarial network |
CN108064447A (en) * | 2017-11-29 | 2018-05-22 | 深圳前海达闼云端智能科技有限公司 | Method for displaying image, intelligent glasses and storage medium |
US20240087214A1 (en) * | 2021-02-24 | 2024-03-14 | Google Llc | Color and infra-red three-dimensional reconstruction using implicit radiance functions |
CN113780258A (en) * | 2021-11-12 | 2021-12-10 | 清华大学 | Intelligent depth classification method and device for photoelectric calculation light field |
US11450017B1 (en) * | 2021-11-12 | 2022-09-20 | Tsinghua University | Method and apparatus for intelligent light field 3D perception with optoelectronic computing |
CN114742703A (en) * | 2022-03-11 | 2022-07-12 | 影石创新科技股份有限公司 | Method, device and equipment for generating binocular stereoscopic panoramic image and storage medium |
CN115170637A (en) * | 2022-07-08 | 2022-10-11 | 深圳市优必选科技股份有限公司 | Virtual visual angle image construction method and device, control equipment and readable storage medium |
Non-Patent Citations (2)
Title |
---|
Feng, Jinhui; Li, Sumei; Chang, Yongli: "Binocular visual mechanism guided no-reference stereoscopic image quality assessment considering spatial saliency", 2021 International Conference on Visual Communications and Image Processing (VCIP), 19 January 2022 (2022-01-19) *
Hong, Zhiguo; Wang, Yongbin; Shi, Minyong: "Development of automatic stereoscopic video generation software based on the principle of binocular stereo vision", Journal of Communication University of China (Natural Science Edition), No. 03, 30 June 2016 (2016-06-30) *
Also Published As
Publication number | Publication date |
---|---|
CN117974742B (en) | 2024-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11100664B2 (en) | Depth-aware photo editing | |
CN110264509B (en) | Method, apparatus, and storage medium for determining pose of image capturing device | |
Takahashi et al. | From focal stack to tensor light-field display | |
US11663733B2 (en) | Depth determination for images captured with a moving camera and representing moving features | |
CN109741388B (en) | Method and apparatus for generating a binocular depth estimation model | |
KR20200049833A (en) | Depth estimation methods and apparatus, electronic devices, programs and media | |
CN112989904A (en) | Method for generating style image, method, device, equipment and medium for training model | |
CN115690382B (en) | Training method of deep learning model, and method and device for generating panorama | |
US12088779B2 (en) | Optical flow based omnidirectional stereo video processing method | |
CN111161398B (en) | Image generation method, device, equipment and storage medium | |
CN109495733B (en) | Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof | |
CN106228530A (en) | A kind of stereography method, device and stereophotography equipment | |
US20230106679A1 (en) | Image Processing Systems and Methods | |
CN114742703A (en) | Method, device and equipment for generating binocular stereoscopic panoramic image and storage medium | |
See et al. | Virtual reality 360 interactive panorama reproduction obstacles and issues | |
CN111818265B (en) | Interaction method and device based on augmented reality model, electronic equipment and medium | |
CN112802206A (en) | Roaming view generation method, device, equipment and storage medium | |
CN117974742B (en) | Binocular image generation method, binocular image generation device, binocular image generation apparatus, binocular image generation storage medium, and binocular image generation program product | |
CN110892706B (en) | Method for displaying content derived from light field data on a 2D display device | |
Tenze et al. | altiro3d: scene representation from single image and novel view synthesis | |
CN116309137A (en) | Multi-view image deblurring method, device and system and electronic medium | |
CN116630744A (en) | Image generation model training method, image generation device and medium | |
KR101788005B1 (en) | Method for generating multi-view image by using a plurality of mobile terminal | |
CN115660959B (en) | Image generation method and device, electronic equipment and storage medium | |
US12056887B2 (en) | Methods and systems for unsupervised depth estimation for fisheye cameras using spatial-temporal consistency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||