CN109644259A - Three-dimensional image preprocessing method and device, and head-mounted display device - Google Patents

Three-dimensional image preprocessing method and device, and head-mounted display device

Info

Publication number
CN109644259A
CN109644259A (application CN201780050816.7A)
Authority
CN
China
Prior art keywords: information, target, image, target image, pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780050816.7A
Other languages
Chinese (zh)
Inventor
黄政
赵越
谢俊
卢启栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Royole Technologies Co Ltd
Original Assignee
Shenzhen Royole Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Royole Technologies Co Ltd filed Critical Shenzhen Royole Technologies Co Ltd
Publication of CN109644259A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Abstract

A three-dimensional image preprocessing method and device, and a head-mounted display device. The three-dimensional image preprocessing method includes: acquiring relative position information of the target object corresponding to each pixel point in a target image, the relative position information including the distance and angle of the target object relative to the camera lens; reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to each pixel point and the color information of that pixel point; acquiring interpupillary distance information of the target user viewing the target image; and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information. The method can reduce the distance-perception deviation presented to the user when a head-mounted display device displays a three-dimensional image.

Description

Three-dimensional image preprocessing method and device, and head-mounted display device
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional image preprocessing method and device and a head-mounted display device.
Background
Shooting a three-dimensional image requires two cameras operating synchronously, with the spacing between the two cameras simulating the interpupillary distance between the two eyes of a human being. At present, when three-dimensional images are shot, the inter-axis distance between the two cameras (hereinafter, the two-camera inter-axis distance) is generally set in proportion to the object distance: a smaller inter-axis distance is used for close shots and a larger inter-axis distance for distant shots, and the photographer must judge the specific adjustment range from experience. However, if the two-camera inter-axis distance is adjusted according to the photographer's experience, the imaging quality of the three-dimensional image is inevitably affected by that experience, and stable imaging quality cannot be ensured. Meanwhile, because of individual differences, the interpupillary distances of different people may differ, so different viewers of the same three-dimensional image can have very different perception experiences; the distance perception conveyed by the three-dimensional image is therefore biased, which degrades the user's viewing experience.
Disclosure of Invention
In view of the foregoing problems in the prior art, embodiments of the present invention provide a three-dimensional image preprocessing method and apparatus, and a head-mounted display device, so as to reduce the distance-perception deviation presented to the user when the head-mounted display device displays a three-dimensional image.
A three-dimensional image preprocessing method comprises the following steps:
acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of the pixel points, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
acquiring pupil distance information of a target user watching the target image;
and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
A three-dimensional image preprocessing device comprises:
a relative position acquisition unit, configured to acquire relative position information of the target object corresponding to each pixel point in a target image and color information of the pixel points, wherein the relative position information includes the distance and angle of the target object relative to the camera lens;
the three-dimensional model reconstruction unit is used for reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
the pupil distance information acquisition unit is used for acquiring pupil distance information of a target user watching the target image;
and the three-dimensional image projection unit is used for re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
A head-mounted display device, comprising: the device comprises a processor, a memory, a distance detection module, a first display screen and a second display screen, wherein the memory is electrically connected with the processor and is used for storing a target image and an executable program code, and the processor is used for reading the target image and the executable program code from the memory and executing the following operations:
acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of the pixel points, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
the distance detection module is used for detecting the distance information between the first display screen and the second display screen and acquiring the interpupillary distance information of a target user watching the target image according to the distance information;
the processor is further configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information, and then display the left-eye two-dimensional image through the first display screen and display the right-eye two-dimensional image through the second display screen.
According to the three-dimensional image preprocessing method, a three-dimensional model of the target image is reconstructed from the relative position information of the target object corresponding to each pixel point in the target image, combined with the color information of that pixel point, and the three-dimensional model is then re-projected according to the interpupillary distance information of the user viewing the target image. Different users therefore obtain a projection matched to their own interpupillary distance when viewing the target image, which improves the three-dimensional viewing experience.
Meanwhile, because the target image undergoes three-dimensional model reconstruction and re-projection, the two-camera inter-axis distance does not need to be adjusted according to the distance of the target object during shooting; the target image can be shot directly with a fixed inter-axis distance. This removes the influence of the photographer's experience and reduces the cost of shooting three-dimensional images.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below.
Fig. 1 is a first flowchart of a three-dimensional image preprocessing method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step 101 shown in FIG. 1;
fig. 3 is a schematic diagram illustrating an imaging position relationship of a target image in the three-dimensional image preprocessing method according to the embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating program modules of a three-dimensional image preprocessing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a relative position obtaining unit of the three-dimensional image preprocessing device according to the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a three-dimensional model reconstruction unit of the three-dimensional image preprocessing apparatus according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a interpupillary distance information obtaining unit of the three-dimensional image preprocessing device according to the embodiment of the present invention;
fig. 8 is a schematic structural diagram of a three-dimensional image projection unit of a three-dimensional image preprocessing device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, in an embodiment of the present invention, a three-dimensional image preprocessing method is provided, which may be applied to a head-mounted display device (e.g., a 3D head-mounted theater, a virtual reality helmet, an augmented reality helmet, etc.) to preprocess and re-project a target three-dimensional image to be displayed according to interpupillary distance information of different users, so as to reduce a distance deviation presented to a user when the head-mounted display device displays the three-dimensional image, and improve a viewing experience of the user.
In this embodiment, the three-dimensional image preprocessing method at least includes the following steps:
step 101: acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of the pixel points, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
step 102: reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
step 103: acquiring pupil distance information of a target user watching the target image;
step 104: and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the pupil distance information.
The target image is a three-dimensional image formed in advance by synchronous shooting with two cameras. In the process of shooting the target image, the inter-axis distance between the two cameras (hereinafter, the two-camera inter-axis distance) may be a fixed value. For example, it may be set to 6.5 cm, a typical value of the human interpupillary distance. In this way, the imaging quality of the target image is not affected by the photographer's experience, and the cost of shooting the three-dimensional image can be reduced.
Referring to fig. 2, in an embodiment, the acquiring the relative position information of the target object corresponding to each pixel point in the target image includes:
step 201: acquiring the two-camera inter-axis distance information of the target image during shooting and the focal length information corresponding to each pixel point, wherein the two-camera inter-axis distance is kept fixed while the target image is shot;
step 202: acquiring the imaging position of each pixel point in the target image and parallax information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
step 203: calculating the distance of the target object corresponding to each pixel point in the target image relative to the camera lens according to the two-camera inter-axis distance information, the focal length information and the parallax information;
step 204: and calculating the angle of the target object corresponding to each pixel point in the target image relative to the camera lens according to the imaging position of each pixel point in the target image and the focal length information.
The two-camera inter-axis distance information is the distance between the lens optical axes of the two cameras that shot the target image, i.e. the two-camera inter-axis distance. In this embodiment, the inter-axis distance may be set to a fixed value, for example 6.5 cm. Referring to fig. 3, 301 and 302 are the left camera and the right camera, respectively. P is the target object. S1 and S2 are the photosensitive surfaces of the left and right cameras, respectively. f is the lens focal length of the left and right cameras. P1 and P2 are the imaging points of the target object P on the photosensitive surface S1 of the left camera and the photosensitive surface S2 of the right camera, respectively. x_l is the distance between the imaging point P1 and the lens optical axis of the left camera 301, and x_r is the distance between the imaging point P2 and the lens optical axis of the right camera 302. O_l and O_r are the lens optical centers of the left camera 301 and the right camera 302, respectively.
It is understood that the imaging position of each pixel point in the target image may include its imaging position on the photosensitive surface S1 and its imaging position on the photosensitive surface S2. In this embodiment, the imaging position of a pixel point on the photosensitive surface S1 is characterized by the distance x_l between the imaging point P1 and the lens optical axis of the left camera 301, and its imaging position on the photosensitive surface S2 is characterized by the distance x_r between the imaging point P2 and the lens optical axis of the right camera 302. On this basis, the parallax information of the left-eye and right-eye two-dimensional images corresponding to the pixel point is determined by the difference between the distances x_l and x_r.
It is to be understood that, for convenience of explaining the positional relationship between the above elements, in this embodiment the photosensitive surface S1 of the left camera 301 and the photosensitive surface S2 of the right camera 302 are each rotated 180 degrees about the respective lens optical centers O_l and O_r, as shown in fig. 3. The lens optical axis of the left camera 301 passes through O_l, and the lens optical axis of the right camera 302 passes through O_r. The distance from the target object P to the straight line through the lens optical center O_l of the left camera 301 and the lens optical center O_r of the right camera 302 is Z, i.e. the distance of the target object relative to the camera lens is Z. In this embodiment, the lens optical axes of the left camera 301 and the right camera 302 are parallel to each other; therefore, the distance T between the lens optical centers O_l and O_r is the two-camera inter-axis distance.
According to the principle of binocular range finding, there is a parallax d = x_l - x_r between the imaging point P1 of the target object P on the photosensitive surface S1 and the imaging point P2 on the photosensitive surface S2. Meanwhile, as can be seen from the positional relationship shown in fig. 3, the triangle formed by P, P1 and P2 is similar to the triangle formed by P, O_l and O_r. In this embodiment, a two-dimensional coordinate system is established with the lens optical center O_l of the left camera and the lens optical center O_r of the right camera as the respective origins; the positive and negative directions of the coordinate system are shown by the arrows in fig. 3. The coordinates of the imaging point P1 are (x_l, f), and the coordinates of the imaging point P2 are (x_r, f). In this embodiment, x_l and x_r are signed numbers: when the imaging point is to the left of the lens optical axis the value is negative, and when the imaging point is to the right of the lens optical axis the value is positive. Thus, in fig. 3, x_l in the coordinates of the imaging point P1 is positive, i.e. the distance between the imaging point P1 and the lens optical axis of the left camera 301 is x_l, while x_r in the coordinates of the imaging point P2 is negative, i.e. the distance between the imaging point P2 and the lens optical axis of the right camera 302 is -x_r. The parallax information of the corresponding pixel point in the target image on the left-eye and right-eye two-dimensional images, i.e. between the imaging point P1 on the photosensitive surface S1 and the imaging point P2 on the photosensitive surface S2, can therefore be calculated from the coordinates of the imaging points. Based on the similarity relationship of the triangles, the following equation can be obtained:
(T - (x_l - x_r)) / T = (Z - f) / Z (1)
from the above equation (1), the following equation can be obtained:
that is, the distance of the target object from the camera lens is Z, the two-camera interaxial distance information T, the lens focal length f, and the imaging parallax d of the target object on the two light-sensing surfaces S1 and S2 is xl-xrAnd (4) correlating. Therefore, by obtaining parallax information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to each pixel point in the target image, information of the distance between two axes of the target image during shooting, and information of the focal length corresponding to each pixel point, the distance Z of the target object corresponding to each pixel point in the target image relative to the camera lens can be calculated according to the above equation (2).
Meanwhile, the angle information of the target object P relative to the lens optical axes of the two cameras can also be obtained from the positional relationship shown in fig. 3. Specifically, let the angle of the target object P relative to the lens optical axis of the left camera be θ; from the positional relationship shown in fig. 3, the following angle calculation equation can be obtained:
θ = arctan(x_l / f) (3)
that is, by acquiring the imaging position of each pixel in the target image and the focal length information, the angle θ of the target object corresponding to each pixel in the target image relative to the camera lens can be calculated according to the above equation (3).
In an embodiment, the reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point includes:
constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
and coloring the corresponding pixel points on the three-dimensional outline of the target image according to the color information of each pixel point to obtain a three-dimensional model of the target image.
Specifically, after the distance Z of the target object corresponding to each pixel point in the target image relative to the camera lens and the angle θ of the target object corresponding to each pixel point relative to the camera lens are calculated by using the method provided in the embodiment shown in fig. 3, the position of the point on the target object corresponding to each pixel point relative to the camera lens can be determined, and then the three-dimensional contour of the target image is constructed according to the points on the target object corresponding to all the pixel points. On this basis, in combination with color information of each pixel, such as RGB values, gray values, and the like, the corresponding pixel on the three-dimensional contour can be colored to obtain a three-dimensional model of the target image, thereby realizing reconstruction of the three-dimensional model of the target image.
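The contour-then-color step above can be sketched as follows (Python; the data layout and names are assumptions, and only the horizontal plane is modelled for brevity): each pixel's (Z, θ) pair is converted to a point position, and the point is then tagged with the pixel's color information.

```python
import math

def reconstruct_model(pixels):
    """Build a colored point list from per-pixel relative positions.

    pixels : iterable of (z, theta, rgb), where z is the distance of the
             target object to the camera lens, theta its angle to the
             optical axis (radians), and rgb the pixel's color information.
    """
    model = []
    for z, theta, rgb in pixels:
        x = z * math.tan(theta)       # lateral offset from the optical axis
        # contour point first, then "coloring" by attaching the pixel color
        model.append({"pos": (x, z), "color": rgb})
    return model
```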
In one embodiment, the acquiring information of interpupillary distance of a target user watching the target image includes:
detecting distance information of a first display screen and a second display screen in head-mounted display equipment for a target user to watch the target image;
and determining the interpupillary distance information of the target user according to the distance information of the first display screen and the second display screen.
Specifically, the user may view the target image through a head-mounted display device such as a 3D head-mounted theater, a virtual reality helmet, an augmented reality helmet, or the like. In this embodiment, the head-mounted display device may include a first display screen and a second display screen, and the distance between the first display screen and the second display screen is adjustable to adapt to the interpupillary distance of different users. Meanwhile, the head-mounted display device may further include a distance detection module for detecting a distance between the first display screen and the second display screen. When the user adjusts the distance between the first display screen and the second display screen, the distance between the first display screen and the second display screen can be detected through a distance detection module on the head-mounted display device.
It can be understood that the user adjusts the distance between the first display screen and the second display screen according to his or her own interpupillary distance so as to obtain the best viewing experience; that is, when the best viewing experience is obtained, the distance between the two screens just matches the user's interpupillary distance. Therefore, by detecting the distance between the first display screen and the second display screen, the user's interpupillary distance information can be determined. In this embodiment, the distance between the first display screen and the second display screen may be directly taken as the interpupillary distance of the target user.
In one embodiment, the re-projecting the three-dimensional model of the target imagery into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information comprises:
extracting a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image from the three-dimensional model of the target image according to the pupil distance information;
and projecting the left-eye two-dimensional image to a first display screen of the head-mounted display device, and projecting the right-eye two-dimensional image to a second display screen of the head-mounted display device.
It can be understood that according to the three-dimensional stereoscopic image display principle of the head-mounted display device, the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to each frame of image need to be extracted from the three-dimensional model of the target image and projected to the first display screen and the second display screen of the head-mounted display device respectively, so that a vivid three-dimensional display effect is realized. In this embodiment, the left-eye two-dimensional image and the right-eye two-dimensional image are extracted according to the interpupillary distance of the target user and then projected to the corresponding first display screen and the second display screen respectively, so that the three-dimensional image formed by the left-eye two-dimensional image and the right-eye two-dimensional image can be better matched with the interpupillary distance of the current target user, and a more real three-dimensional display effect is obtained.
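The re-projection described above can be sketched as a pinhole projection into two virtual cameras separated by the viewer's interpupillary distance (Python; the function and parameter names are illustrative assumptions, not from the patent):

```python
def reproject(model, ipd, focal_f):
    """Project each model point into left-eye / right-eye image coordinates.

    model   : points as produced by the reconstruction step, each with a
              horizontal position (x, z) and a color.
    ipd     : the target user's interpupillary distance; the virtual eyes
              sit at x = -ipd/2 and x = +ipd/2, both looking along +Z.
    focal_f : focal length of the virtual projection.
    """
    left, right = [], []
    for p in model:
        x, z = p["pos"]
        left.append((focal_f * (x + ipd / 2) / z, p["color"]))
        right.append((focal_f * (x - ipd / 2) / z, p["color"]))
    return left, right
```

A point on the center line projects symmetrically into the two images, and the offset between the two projections grows with the chosen ipd, which is what matches the stereo effect to the individual viewer.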
Referring to fig. 4, in an embodiment of the present invention, a three-dimensional image preprocessing apparatus 400 is provided, including:
a relative position obtaining unit 410, configured to obtain relative position information of a target object corresponding to each pixel point in a target image, where the relative position information includes a distance and an angle of the target object relative to a camera lens;
a three-dimensional model reconstruction unit 430, configured to reconstruct a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
a pupil distance information obtaining unit 450, configured to obtain pupil distance information of a target user watching the target image;
and a three-dimensional image projection unit 470, configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
Referring to fig. 5, in one embodiment, the relative position obtaining unit 410 includes:
a distance information acquiring subunit 411, configured to acquire the two-camera inter-axis distance information of the target image during shooting and the focal length information corresponding to each pixel point, where the two-camera inter-axis distance is kept fixed while the target image is shot;
a parallax information acquiring subunit 413, configured to acquire an imaging position of each pixel in the target image and parallax information of the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel;
a target distance calculating subunit 415, configured to calculate, according to the two-camera inter-axis distance information, the focal length information and the parallax information, the distance of the target object corresponding to each pixel point in the target image relative to the camera lens;
and an imaging angle calculation subunit 417, configured to calculate, according to the imaging position of each pixel in the target image and the focal length information, an angle of the target object corresponding to each pixel in the target image relative to the camera lens.
Referring to fig. 6, in an embodiment, the three-dimensional model reconstruction unit 430 includes:
a three-dimensional contour constructing subunit 431, configured to construct a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
and the three-dimensional contour coloring subunit 433 is configured to color, according to the color information of each pixel point, a corresponding pixel point on the three-dimensional contour of the target image, so as to obtain a three-dimensional model of the target image.
Referring to fig. 7, in an embodiment, the interpupillary distance information acquiring unit 450 includes:
a screen spacing detection subunit 451, configured to detect the distance information between the first display screen and the second display screen of the head-mounted display device with which the target user views the target image;
and an interpupillary distance information determining subunit 453, configured to determine interpupillary distance information of the target user according to the distance information of the first display screen and the second display screen.
Referring to fig. 8, in an embodiment, the three-dimensional image projection unit 470 includes:
an image extraction subunit 471, configured to extract, from the three-dimensional model of the target image, a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image according to the interpupillary distance information;
an image projection subunit 473 is configured to project the left-eye two-dimensional image to a first display screen of the head-mounted display device, and project the right-eye two-dimensional image to a second display screen of the head-mounted display device.
It is to be understood that the functions and specific implementations of the units of the three-dimensional image preprocessing device 400 according to the embodiment of the present invention can also refer to the related descriptions in the method embodiments shown in fig. 1 to 3, which are not described herein again.
Referring to fig. 9, in an embodiment of the present invention, a head-mounted display apparatus 600 is provided, including: the system comprises a processor 610, a memory 630 electrically connected with the processor, a distance detection module 650, a first display screen 670 and a second display screen 690, wherein the memory 630 is used for storing a target image and executable program codes, and the processor 610 is used for reading the target image and the executable program codes from the memory 630 and executing the following operations:
acquiring relative position information of a target object corresponding to each pixel point in a target image, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
the distance detection module is used for detecting the distance information between the first display screen and the second display screen and acquiring the interpupillary distance information of a target user watching the target image according to the distance information;
the processor is further configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information, and then display the left-eye two-dimensional image through the first display screen and display the right-eye two-dimensional image through the second display screen.
In one embodiment, the processor 610 is further configured to:
acquiring inter-axis distance information of the two cameras that captured the target image and focal length information corresponding to each pixel point, wherein the inter-axis distance between the two cameras is kept fixed while the target image is captured;
acquiring the imaging position of each pixel point in the target image and parallax information between the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
calculating the distance of the target object corresponding to each pixel point in the target image relative to the camera lens according to the inter-axis distance information, the focal length information, and the parallax information;
and calculating the angle of the target object corresponding to each pixel point in the target image relative to the camera lens according to the imaging position of each pixel point in the target image and the focal length information.
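The distance and angle computations described above correspond to standard pinhole-stereo triangulation: depth Z = f·B/d for inter-camera-axis distance (baseline) B, focal length f, and disparity d, with the viewing angle recovered from the pixel's imaging position. A minimal Python sketch, with hypothetical function and parameter names (the patent does not prescribe this exact form):

```python
import math

def target_distance(baseline_mm: float, focal_px: float, disparity_px: float) -> float:
    """Distance of the target object from the camera lens via Z = f * B / d.
    baseline_mm: fixed inter-axis distance between the two cameras;
    focal_px: focal length expressed in pixels;
    disparity_px: left/right imaging-position difference for the pixel."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity => effectively infinite distance
    return focal_px * baseline_mm / disparity_px

def target_angle(x_px: float, cx_px: float, focal_px: float) -> float:
    """Horizontal angle of the target object relative to the optical axis,
    from the pixel's imaging position x_px, the principal point cx_px,
    and the focal length in pixels."""
    return math.atan2(x_px - cx_px, focal_px)
```

For example, with a 65 mm baseline, a 1000 px focal length, and a 10 px disparity, the target lies 6.5 m from the lens.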
In one embodiment, the processor 610 is further configured to:
constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
and coloring the corresponding pixel points on the three-dimensional contour of the target image according to the color information of each pixel point to obtain the three-dimensional model of the target image.
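The contour-construction and coloring steps above amount to back-projecting each pixel into a colored 3-D point. A simplified Python sketch that models only the horizontal angle for brevity (the function name and data layout are illustrative assumptions, not the patent's implementation):

```python
import math

def reconstruct_point_cloud(pixels):
    """Back-project each pixel into a colored 3-D point.

    `pixels` is an iterable of (distance, angle, color) tuples, matching the
    per-pixel relative-position and color information described above:
    distance and horizontal angle relative to the camera lens, plus color.
    """
    cloud = []
    for distance, angle, color in pixels:
        x = distance * math.sin(angle)  # lateral offset from the optical axis
        z = distance * math.cos(angle)  # depth along the optical axis
        # The (x, 0, z) points form the three-dimensional contour;
        # attaching the color yields the colored three-dimensional model.
        cloud.append(((x, 0.0, z), color))
    return cloud
```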
In one embodiment, the processor 610 is further configured to:
extracting a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image from the three-dimensional model of the target image according to the interpupillary distance information;
and projecting the left-eye two-dimensional image to a first display screen of the head-mounted display device, and projecting the right-eye two-dimensional image to a second display screen of the head-mounted display device.
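The per-frame extraction of left-eye and right-eye images can be viewed as projecting the reconstructed points to two virtual pinhole cameras whose optical centers are separated by the user's interpupillary distance. A minimal Python sketch under that assumption (the function name and the sparse dictionary output are illustrative choices):

```python
def reproject(cloud, ipd_mm, focal_px, cx, cy):
    """Project a colored point cloud to left/right virtual pinhole cameras
    offset by +/- half the interpupillary distance along the x axis.
    Returns two dicts mapping (u, v) pixel coordinates to color."""
    half = ipd_mm / 2.0
    left, right = {}, {}
    for (x, y, z), color in cloud:
        if z <= 0:
            continue  # point behind the virtual cameras: not visible
        # Left camera sits at x = -half, so the point appears shifted by +half.
        u_l = int(round(focal_px * (x + half) / z + cx))
        u_r = int(round(focal_px * (x - half) / z + cx))
        v = int(round(focal_px * y / z + cy))
        left[(u_l, v)] = color
        right[(u_r, v)] = color
    return left, right
```

The resulting left and right images would then be sent to the first and second display screens respectively; a larger IPD yields a larger disparity and hence a stronger depth impression.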
It is to be understood that, in the embodiment of the present invention, steps of operations performed by the processor 610 and specific implementation thereof may also refer to related descriptions in the method embodiments shown in fig. 1 to fig. 3, and are not described herein again.
In the three-dimensional image preprocessing method, a three-dimensional model of the target image is reconstructed from the relative position information of the target object corresponding to each pixel point combined with the color information of the pixel point, and the three-dimensional model is then re-projected according to the interpupillary distance of whichever user is viewing the target image. Each user therefore sees a projection matched to his or her own interpupillary distance, which improves the three-dimensional viewing experience. Moreover, because the target image undergoes three-dimensional reconstruction and re-projection, the inter-axis distance between the two cameras does not need to be adjusted to the distance of the target object during shooting; the target image can be captured directly with a fixed inter-axis distance, which avoids relying on the photographer's experience and reduces the cost of shooting three-dimensional images.
It is to be understood that the units and steps of the various examples described in connection with the embodiments of the invention may be implemented as electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In addition, the order of the steps in the method of the embodiments of the invention may be adjusted, and steps may be combined or deleted, according to actual needs. Likewise, the units in the device of the embodiments of the invention may be merged, divided, or deleted according to actual needs. Each functional unit in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (14)

  1. A three-dimensional image preprocessing method is characterized by comprising the following steps:
    acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of each pixel point, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
    reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
    acquiring interpupillary distance information of a target user watching the target image;
    and re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
  2. The method of claim 1, wherein the obtaining of the relative position information of the target object corresponding to each pixel point in the target image comprises:
    acquiring inter-axis distance information of the two cameras that captured the target image and focal length information corresponding to each pixel point, wherein the inter-axis distance between the two cameras is kept fixed while the target image is captured;
    acquiring the imaging position of each pixel point in the target image and parallax information between the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
    calculating the distance of the target object corresponding to each pixel point in the target image relative to the camera lens according to the inter-axis distance information, the focal length information, and the parallax information;
    and calculating the angle of the target object corresponding to each pixel point in the target image relative to the camera lens according to the imaging position of each pixel point in the target image and the focal length information.
  3. The method according to claim 1 or 2, wherein the reconstructing the three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point comprises:
    constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
    and coloring the corresponding pixel points on the three-dimensional outline of the target image according to the color information of each pixel point to obtain a three-dimensional model of the target image.
  4. The method of claim 1 or 2, wherein the obtaining of interpupillary distance information of a target user viewing the target image comprises:
    detecting distance information of a first display screen and a second display screen in head-mounted display equipment for a target user to watch the target image;
    and determining the interpupillary distance information of the target user according to the distance information of the first display screen and the second display screen.
  5. The method of claim 4, wherein the re-projecting of the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information comprises:
    extracting a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image from the three-dimensional model of the target image according to the interpupillary distance information;
    and projecting the left-eye two-dimensional image to a first display screen of the head-mounted display device, and projecting the right-eye two-dimensional image to a second display screen of the head-mounted display device.
  6. A three-dimensional image preprocessing device is characterized by comprising:
    the relative position acquisition unit is used for acquiring relative position information of a target object corresponding to each pixel point in a target image, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
    the three-dimensional model reconstruction unit is used for reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
    the interpupillary distance information acquisition unit is used for acquiring interpupillary distance information of a target user watching the target image;
    and the three-dimensional image projection unit is used for re-projecting the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information.
  7. The apparatus of claim 6, wherein the relative position acquisition unit comprises:
    the distance information acquisition subunit is used for acquiring the inter-axis distance information of the two cameras that captured the target image and the focal length information corresponding to each pixel point, wherein the inter-axis distance between the two cameras is kept fixed while the target image is captured;
    the parallax information acquisition subunit is used for acquiring the imaging position of each pixel point in the target image and the parallax information between the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
    the target distance calculating subunit is used for calculating, according to the inter-axis distance information, the focal length information, and the parallax information, the distance between the target object corresponding to each pixel point in the target image and the camera lens;
    and the imaging angle calculating subunit is used for calculating the angle of the target object corresponding to each pixel point in the target image relative to the camera lens according to the imaging position of each pixel point in the target image and the focal length information.
  8. The apparatus of claim 6 or 7, wherein the three-dimensional model reconstruction unit comprises:
    the three-dimensional contour constructing subunit is used for constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
    and the three-dimensional contour coloring subunit is used for coloring the corresponding pixel points on the three-dimensional contour of the target image according to the color information of each pixel point to obtain a three-dimensional model of the target image.
  9. The apparatus of claim 6 or 7, wherein the interpupillary distance information acquiring unit comprises:
    the screen spacing detection subunit is used for detecting spacing information between a first display screen and a second display screen in the head-mounted display device on which the target user views the target image;
    and the interpupillary distance information determining subunit is used for determining the interpupillary distance information of the target user according to the distance information of the first display screen and the second display screen.
  10. The apparatus of claim 9, wherein the three-dimensional image projection unit comprises:
    the image extraction subunit is used for extracting a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image from the three-dimensional model of the target image according to the interpupillary distance information;
    and the image projection subunit is used for projecting the left-eye two-dimensional image to a first display screen of the head-mounted display device and projecting the right-eye two-dimensional image to a second display screen of the head-mounted display device.
  11. A head-mounted display device, comprising: the device comprises a processor, a memory, a distance detection module, a first display screen and a second display screen, wherein the memory is electrically connected with the processor and is used for storing a target image and an executable program code, and the processor is used for reading the target image and the executable program code from the memory and executing the following operations:
    acquiring relative position information of a target object corresponding to each pixel point in a target image and color information of each pixel point, wherein the relative position information comprises the distance and the angle of the target object relative to a camera lens;
    reconstructing a three-dimensional model of the target image according to the relative position information of the target object corresponding to the pixel point and the color information of the pixel point;
    the distance detection module is used for detecting the distance information between the first display screen and the second display screen and acquiring the interpupillary distance information of a target user watching the target image according to the distance information;
    the processor is further configured to re-project the three-dimensional model of the target image into a left-eye two-dimensional image and a right-eye two-dimensional image according to the interpupillary distance information, and then display the left-eye two-dimensional image through the first display screen and display the right-eye two-dimensional image through the second display screen.
  12. The head-mounted display device of claim 11, wherein the processor is further configured to:
    acquiring inter-axis distance information of the two cameras that captured the target image and focal length information corresponding to each pixel point, wherein the inter-axis distance between the two cameras is kept fixed while the target image is captured;
    acquiring the imaging position of each pixel point in the target image and parallax information between the left-eye two-dimensional image and the right-eye two-dimensional image corresponding to the pixel point;
    calculating the distance of the target object corresponding to each pixel point in the target image relative to the camera lens according to the inter-axis distance information, the focal length information, and the parallax information;
    and calculating the angle of the target object corresponding to each pixel point in the target image relative to the camera lens according to the imaging position of each pixel point in the target image and the focal length information.
  13. The head-mounted display device of claim 11 or 12, wherein the processor is further configured to:
    constructing a three-dimensional contour of the target image according to the relative position information of the target object corresponding to each pixel point;
    and coloring the corresponding pixel points on the three-dimensional outline of the target image according to the color information of each pixel point to obtain a three-dimensional model of the target image.
  14. The head-mounted display device of claim 11 or 12, wherein the processor is further configured to:
    extracting a left-eye two-dimensional image and a right-eye two-dimensional image corresponding to each frame of image from the three-dimensional model of the target image according to the interpupillary distance information;
    and projecting the left-eye two-dimensional image to a first display screen of the head-mounted display device, and projecting the right-eye two-dimensional image to a second display screen of the head-mounted display device.
CN201780050816.7A 2017-06-21 2017-06-21 3-dimensional image preprocess method, device and wear display equipment Pending CN109644259A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/089362 WO2018232630A1 (en) 2017-06-21 2017-06-21 3d image preprocessing method, device and head-mounted display device

Publications (1)

Publication Number Publication Date
CN109644259A true CN109644259A (en) 2019-04-16

Family

ID=64737429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780050816.7A Pending CN109644259A (en) 2017-06-21 2017-06-21 3-dimensional image preprocess method, device and wear display equipment

Country Status (2)

Country Link
CN (1) CN109644259A (en)
WO (1) WO2018232630A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06167688A (en) * 1992-11-30 1994-06-14 Sanyo Electric Co Ltd Stereoscopic color liquid crystal display device
CN103440036A (en) * 2013-08-23 2013-12-11 Tcl集团股份有限公司 Three-dimensional image display and interactive operation method and device
CN104333747A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing equipment
CN104918035A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining three-dimensional image of target
CN105611278A (en) * 2016-02-01 2016-05-25 欧洲电子有限公司 Image processing method and system for preventing naked eye 3D viewing dizziness and display device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103517060B (en) * 2013-09-03 2016-03-02 展讯通信(上海)有限公司 A kind of display control method of terminal equipment and device
US20150185484A1 (en) * 2013-12-30 2015-07-02 Electronics And Telecommunications Research Institute Pupil tracking apparatus and method
WO2016115874A1 (en) * 2015-01-21 2016-07-28 成都理想境界科技有限公司 Binocular ar head-mounted device capable of automatically adjusting depth of field and depth of field adjusting method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245096A (en) * 2021-12-08 2022-03-25 安徽新华传媒股份有限公司 Intelligent photographic 3D simulation imaging system
CN114245096B (en) * 2021-12-08 2023-09-15 安徽新华传媒股份有限公司 Intelligent photographing 3D simulation imaging system

Also Published As

Publication number Publication date
WO2018232630A1 (en) 2018-12-27

Similar Documents

Publication Publication Date Title
US11223820B2 (en) Augmented reality displays with active alignment and corresponding methods
US11693242B2 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
US20200120329A1 (en) Augmented reality displays with active alignment and corresponding methods
CN109615703B (en) Augmented reality image display method, device and equipment
US10269139B2 (en) Computer program, head-mounted display device, and calibration method
WO2018188277A1 (en) Sight correction method and device, intelligent conference terminal and storage medium
CN111007939B (en) Virtual reality system space positioning method based on depth perception
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
TWI788739B (en) 3D display device, 3D image display method
US11956415B2 (en) Head mounted display apparatus
CN108282650B (en) Naked eye three-dimensional display method, device and system and storage medium
US20180143523A1 (en) Spherical omnipolar imaging
US20170257614A1 (en) Three-dimensional auto-focusing display method and system thereof
JP2022133133A (en) Generation device, generation method, system, and program
CN109644259A (en) 3-dimensional image preprocess method, device and wear display equipment
WO2021237952A1 (en) Augmented reality display system and method
US11182973B2 (en) Augmented reality display
CN214756700U (en) 3D display device
KR20160042694A (en) Alignment device for stereoscopic camera and method thereof
Combier et al. Towards an Augmented Reality Head Mounted Display System Providing Stereoscopic Wide Field of View for Indoor and Outdoor Environments with Interaction through the Gaze Direction
JP2022176559A (en) Spectacle type terminal, program and image display method
CN115334296A (en) Stereoscopic image display method and display device
CN108875711A (en) A method of generating the face signature of identification user or object
CN117710445A (en) Target positioning method and device applied to AR equipment and electronic equipment
SCURTU Stereo Vision, Multi-View Object and Scene Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 43, Dayun software Town, No. 8288 Longgang Avenue, Henggang street, Longgang District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Ruoyu Technology Co.,Ltd.

Address before: Building 43, Dayun software Town, No. 8288 Longgang Avenue, Henggang street, Longgang District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN ROYOLE TECHNOLOGIES Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190416