CN107659772B - 3D image generation method and device and electronic equipment - Google Patents

3D image generation method and device and electronic equipment Download PDF

Info

Publication number
CN107659772B
CN107659772B
Authority
CN
China
Prior art keywords
image
eye
data matrix
user
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710885264.3A
Other languages
Chinese (zh)
Other versions
CN107659772A (en)
Inventor
林敬顺
徐国融
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rongcheng goer Technology Co.,Ltd.
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201710885264.3A priority Critical patent/CN107659772B/en
Publication of CN107659772A publication Critical patent/CN107659772A/en
Application granted granted Critical
Publication of CN107659772B publication Critical patent/CN107659772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 - Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 - Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Embodiments of the invention provide a 3D image generation method, a 3D image generation apparatus, and an electronic device. In the method, a photographing device with a single camera first acquires the data matrix of an original image corresponding to the object the user is currently viewing. Next, the angle formed when the user's left-eye and right-eye lines of sight focus on the viewed object is determined; this angle is the parallax angle and can be regarded as the basis for producing a 3D stereoscopic effect in the viewed image. The data matrix of the original image is then rotated clockwise and counterclockwise, respectively, and a right-eye 2D image and a left-eye 2D image are generated from the rotated matrices, so that the parallax angle exists between the two generated images. Finally, the two 2D images are synthesized to generate a 3D image with a stereoscopic effect. In this way, a user can obtain a 3D image with a stereoscopic effect even when using a photographing device that has only a single camera.

Description

3D image generation method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a 3D image generation method and apparatus, and an electronic device.
Background
A person perceives stereoscopic vision because the left and right eyes see an object from slightly different angles, so the images formed on the two retinas differ; from this difference, the brain determines the spatial position of the object and thereby produces stereoscopic vision.
In the prior art, a user can see a 3D image only by using a dedicated 3D camera. Most existing 3D cameras imitate the structure of the human eyes and are composed of two cameras, which makes them expensive. In practical applications, an image with a 3D effect cannot be captured with an ordinary camera, so ordinary cameras are severely limited in 3D imaging applications.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for generating a 3D image, and an electronic device, which are used to generate a 3D image by using a single camera.
The embodiment of the invention provides a 3D image generation method, which comprises the following steps:
acquiring a data matrix of an original image corresponding to a current object watched by a user;
determining the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object;
respectively rotating the data matrix of the original image clockwise and anticlockwise according to the angle to generate a right eye 2D image and a left eye 2D image;
synthesizing the right-eye 2D image and the left-eye 2D image to generate a 3D image.
Optionally, the determining of the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object includes:
calculating the angle according to the following formula: θ = 2 × tan⁻¹((d/2)/u);
Wherein d is the pupil distance of the user and u is the object distance.
Optionally, the method further comprises:
calculating the object distance according to the following formula:
1/f = 1/u + 1/v, i.e., u = (f × v)/(v - f)
wherein f is the focal length, u is the object distance, v is the image distance, v = ((Y - X) × t × M) + f, X is the number of steps the focusing motor moves when the photographing lens is focused to infinity, Y is the number of steps the focusing motor moves when the photographing lens is focused on the photographed object, M is the magnification of the photographing lens, and t is the motion precision of the focusing motor.
Optionally, the method further comprises:
extracting an eye region in a pre-shot user face image;
and determining the distance between the central points of two eyeballs in the eye region as the pupil distance of the user.
Optionally, the rotating the data matrix of the original image clockwise and counterclockwise according to the angle to generate a right-eye 2D image and a left-eye 2D image respectively includes:
determining the position of any element in the data matrix of the original image in the data matrix of the right-eye 2D image according to the following formula, to generate the data matrix of the right-eye 2D image: R = A × C1;
determining the position of any element in the data matrix of the original image in the data matrix of the left-eye 2D image according to the following formula, to generate the data matrix of the left-eye 2D image: L = A × C2;
wherein A is the position of any element in the data matrix of the original image, and C1 and C2 are the image transformation matrices corresponding to clockwise rotation and counterclockwise rotation, respectively, defined as follows:
(Equation images defining C1 and C2 - not reproduced in this text extraction.)
where θ is the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object, W is the width of the original image, H is the height of the original image, W′1 and H′1 are the preset width and height of the image after clockwise rotation, and W′2 and H′2 are the preset width and height of the image after counterclockwise rotation;
and generating the right eye 2D image and the left eye 2D image according to the data matrix of the right eye 2D image and the data matrix of the left eye 2D image respectively.
An embodiment of the present invention provides a 3D image generation apparatus, including:
the acquisition module is used for acquiring a data matrix of an original image corresponding to a current object watched by a user;
the angle determining module is used for determining the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object;
the rotation module is used for respectively performing clockwise rotation and anticlockwise rotation on the data matrix of the original image according to the angle so as to generate a right eye 2D image and a left eye 2D image;
a synthesizing module for synthesizing the right eye 2D image and the left eye 2D image to generate a 3D image.
An embodiment of the present invention provides an electronic device, including: a memory, and a processor coupled to the memory;
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are to be invoked and executed by the processor;
the processor is configured to execute the one or more computer instructions to perform any one of the above-described 3D image generation methods.
According to the 3D image generation method and apparatus and the electronic device provided by the embodiments of the invention, a photographing device with a single camera first acquires the data matrix of an original image corresponding to the object the user is currently viewing. Next, the angle formed when the user's left-eye and right-eye lines of sight focus on the viewed object is determined; this angle is the parallax angle and can be regarded as the basis for producing a 3D stereoscopic effect in the viewed image. The data matrix of the original image is then rotated clockwise and counterclockwise, respectively, and a right-eye 2D image and a left-eye 2D image are generated from the rotated matrices, so that the parallax angle exists between the two generated images. Finally, the two 2D images are synthesized to generate a 3D image with a stereoscopic effect. In this way, a user can obtain a 3D image with a stereoscopic effect even when using a photographing device that has only a single camera.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a first embodiment of a 3D image generation method according to the present invention;
fig. 2 is a flowchart of a second embodiment of a 3D image generation method according to the present invention;
fig. 3 is a schematic structural diagram of a first embodiment of a 3D image generating apparatus according to the present invention;
fig. 4 is a schematic structural diagram of a second embodiment of a 3D image generating apparatus according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a plurality of" generally means at least two without excluding the case of at least one, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe XXX in embodiments of the present invention, these XXX should not be limited to these terms. These terms are only used to distinguish XXX from each other. For example, a first XXX may also be referred to as a second XXX, and similarly, a second XXX may also be referred to as a first XXX, without departing from the scope of embodiments of the present invention.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a commodity or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such commodity or system. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in a commodity or system that includes that element.
Fig. 1 is a flowchart of a first embodiment of a 3D image generation method according to an embodiment of the present invention, and an execution subject of the 3D image generation method according to the present embodiment may be a shooting device with a single camera, as shown in fig. 1, the method includes the following steps:
s101, acquiring a data matrix of an original image corresponding to a current object watched by a user.
S102, determining the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object.
And S103, respectively rotating the data matrix of the original image clockwise and anticlockwise according to the angle to generate a right eye 2D image and a left eye 2D image.
And S104, synthesizing the right eye 2D image and the left eye 2D image to generate a 3D image.
Firstly, after photographing the object currently viewed by the user, a photographing device with a single camera can obtain an original image of the currently viewed object and the data matrix of that original image. Optionally, each element of the data matrix may be the gray value of one pixel of the original image.
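As an illustration only, this acquisition step could be sketched as follows in Python with OpenCV and NumPy; the libraries and the file name are assumptions and are not part of the patent.

    import cv2
    import numpy as np

    def acquire_gray_matrix(path: str) -> np.ndarray:
        """Read a single-camera shot and return its grayscale data matrix.

        Each matrix element is the gray value of one pixel of the original
        image, as described above. The file path is a hypothetical example.
        """
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            raise FileNotFoundError(f"cannot read image: {path}")
        return image.astype(np.float32)

    # Example usage (hypothetical file name):
    # original = acquire_gray_matrix("current_view.jpg")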
Furthermore, because there is a certain distance between a person's two eyes, which can be regarded as the interpupillary distance, a certain angle is formed when the lines of sight of the left and right eyes focus on the currently viewed object. It is precisely this angle that gives the viewed image its stereoscopic effect, and the size of the angle is generally related to the size of the user's interpupillary distance. The interpupillary distance of an adult is generally between 58 mm and 72 mm. For convenience of the subsequent description, this angle is referred to as the parallax angle. In practical applications, optionally, the median of 58 mm and 72 mm, namely 65 mm, may be selected and used as the user's interpupillary distance.
After the user's interpupillary distance is obtained, the parallax angle, i.e., the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object, can optionally be calculated according to the following formula (1):
θ = 2 × tan⁻¹((d/2)/u)    (1)
wherein d is the pupil distance of the user and u is the object distance. The object distance u is the distance between the currently viewed object and the shooting lens of the shooting device. Alternatively, the object distance u can be calculated by the following equation (2):
1/f = 1/u + 1/v, i.e., u = (f × v)/(v - f)    (2)
where f is the focal length, u is the object distance, v is the image distance, v = ((Y - X) × t × M) + f, X is the number of steps the focus motor moves when the taking lens is focused to infinity, Y is the number of steps the focus motor moves when the taking lens is focused on the photographed object, M is the magnification of the taking lens, and t is the motion precision of the focus motor, which indicates the distance the taking lens moves for each step of the focus motor.
Among the parameters involved in the above calculation, X, M, and t are intrinsic parameters of the photographing lens and can be obtained directly, while Y varies with the distance between the currently viewed object and the photographing lens and can be calculated automatically by the photographing device from the number of steps the focus motor moves after the user triggers the photographing operation.
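For illustration, a minimal sketch of this calculation is given below; the function names and the numeric values in the worked example are assumptions, and equation (2) is read as the thin-lens relation consistent with the variable definitions above.

    import math

    def image_distance(X: float, Y: float, t: float, M: float, f: float) -> float:
        """v = ((Y - X) * t * M) + f, from the focus-motor step counts."""
        return (Y - X) * t * M + f

    def object_distance(f: float, v: float) -> float:
        """Thin-lens relation 1/f = 1/u + 1/v, solved for the object distance u."""
        return (f * v) / (v - f)

    def parallax_angle(d: float, u: float) -> float:
        """theta = 2 * atan((d / 2) / u), returned in radians."""
        return 2.0 * math.atan((d / 2.0) / u)

    # Worked example with assumed values: d = 65 mm, f = 4 mm, and a focus
    # position that yields an image distance v of about 4.02 mm.
    v = 4.02e-3                            # metres
    u = object_distance(4e-3, v)           # about 0.80 m
    theta = parallax_angle(65e-3, u)       # about 0.081 rad (roughly 4.6 degrees)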
The data matrix of the obtained original image is rotated clockwise by half of the parallax angle to form the right-eye 2D image, and rotated counterclockwise by half of the parallax angle to form the left-eye 2D image. Optionally, the rotation process can be understood as rotating the gray value of each pixel in the data matrix of the original image: after the rotation, the gray value of a given pixel moves from its original row and column in the data matrix to a new row and column. Rotating the gray value of every pixel in this manner completes the rotation of the original image data matrix. The matrix obtained after clockwise rotation is the data matrix of the right-eye 2D image, from which the right-eye 2D image can be generated; the matrix obtained after counterclockwise rotation is the data matrix of the left-eye 2D image, from which the left-eye 2D image can be generated.
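The rotation itself is not tied to a particular library; as one possible sketch, it can be performed with OpenCV's affine-warp helpers, rotating about the image center and keeping the original size (an assumption made for simplicity):

    import cv2
    import numpy as np

    def rotate_gray_matrix(gray: np.ndarray, angle_deg: float) -> np.ndarray:
        """Rotate the grayscale data matrix by angle_deg about its center.

        Positive angles rotate counterclockwise (left-eye image) and negative
        angles rotate clockwise (right-eye image); the output keeps the
        original width and height.
        """
        h, w = gray.shape
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
        return cv2.warpAffine(gray, rot, (w, h))

    # Half the parallax angle each way (theta_deg is the parallax angle in degrees):
    # right_eye = rotate_gray_matrix(original, -theta_deg / 2.0)
    # left_eye  = rotate_gray_matrix(original, +theta_deg / 2.0)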
After the rotation operation, the parallax angle exists between the generated right-eye 2D image and left-eye 2D image. Finally, the right-eye 2D image and the left-eye 2D image are synthesized to generate a 3D image with a stereoscopic effect.
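The patent does not prescribe a particular synthesis format; as one common possibility, the two 2D images can be packed side by side for a stereoscopic display. A sketch under that assumption:

    import numpy as np

    def synthesize_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        """Pack the left-eye and right-eye 2D images into one stereoscopic frame.

        Side-by-side packing is only one possible synthesis format; the patent
        itself does not fix a specific one.
        """
        if left.shape != right.shape:
            raise ValueError("left and right images must have the same shape")
        return np.hstack((left, right))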
In this embodiment, a photographing device with a single camera first acquires the data matrix of an original image corresponding to the object the user is currently viewing. Next, the angle formed when the user's left-eye and right-eye lines of sight focus on the viewed object is determined; this angle is the parallax angle and can be regarded as the basis for producing a 3D stereoscopic effect in the viewed image. The data matrix of the original image is then rotated clockwise and counterclockwise, respectively, and a right-eye 2D image and a left-eye 2D image are generated from the rotated matrices, so that the parallax angle exists between the two generated images. Finally, the two 2D images are synthesized to generate a 3D image with a stereoscopic effect. In this way, a user can obtain a 3D image with a stereoscopic effect even when using a photographing device that has only a single camera.
Fig. 2 is a flowchart of a second embodiment of a 3D image generation method according to an embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:
s201, acquiring a data matrix of an original image corresponding to the object currently watched by the user.
The execution process of step S201 is similar to the corresponding steps in the foregoing embodiment, and reference may be made to the relevant description in the embodiment shown in fig. 1, which is not repeated herein.
S202, an eye region in the face image of the user photographed in advance is extracted.
And S203, determining the distance between the central points of the two eyeballs in the eye region as the pupil distance of the user.
The photographing device may photograph an image of the user's face in advance, before the user photographs any currently viewed object. An eye region is then obtained from the whole face image, and the position coordinates of each pupil in a preset coordinate system are identified from the eye region. Optionally, the preset coordinate system takes the center of the eye region as the coordinate origin, the horizontal direction as the X axis, and the vertical direction as the Y axis. The distance between the two pupils is then calculated from their position coordinates, and this distance is determined as the user's interpupillary distance.
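As an illustration, this step could be sketched with OpenCV's stock Haar cascades, taking the centers of the detected eye boxes as the eyeball centers; the cascades, function names, and the pixel-based distance (converting to millimetres would need the imaging scale, which the patent does not detail) are all assumptions.

    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def pupil_distance_pixels(face_gray: np.ndarray) -> float:
        """Distance in pixels between the centers of the two detected eyes."""
        faces = face_cascade.detectMultiScale(face_gray, 1.3, 5)
        if len(faces) == 0:
            raise RuntimeError("no face detected")
        x, y, w, h = faces[0]
        roi = face_gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) < 2:
            raise RuntimeError("fewer than two eyes detected")
        centers = [(ex + ew / 2.0, ey + eh / 2.0) for ex, ey, ew, eh in eyes[:2]]
        return float(np.hypot(centers[0][0] - centers[1][0],
                              centers[0][1] - centers[1][1]))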
Determining the user's interpupillary distance by analyzing the user's eye image in this way is more tailored to the individual than using an average interpupillary distance, as in the first embodiment. Accurately determining the interpupillary distance of the user operating the photographing device lays a foundation for generating a 3D image with the best effect for that user.
It should be noted that although the above steps are described as being performed after the data matrix of the original image is acquired, the invention does not limit the specific time at which the face image is captured; the capture only needs to happen before the step of determining the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object. Optionally, this step can also be performed before the data matrix of the original image is acquired.
And S204, determining the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object.
The execution process of step S204 is similar to the corresponding steps in the foregoing embodiment, and reference may be made to the relevant description in the embodiment shown in fig. 1, which is not repeated herein.
And S205, respectively rotating the data matrix of the original image clockwise and anticlockwise according to the angle to generate a right eye 2D image and a left eye 2D image.
The gray value of each pixel point in the data matrix of the original image can be rotated clockwise or counterclockwise in the following manner.
Assume that the position of the gray value of a certain pixel in the data matrix of the original image is represented as A = (x, y, z), where x and y respectively denote the row and the column of the pixel in the data matrix of the original image, and z is merely a value set uniformly for convenience of the rotation calculation and has no specific meaning; optionally, in practical applications, z is generally set to 1.
When the data matrix of the original image is rotated clockwise, the position of the gray value of each pixel in the data matrix of the original image is multiplied by the image transformation matrix C1, i.e., R = A × C1, which gives the position of the gray value of that pixel in the data matrix of the clockwise-rotated right-eye 2D image.
where C1 is defined by an equation image in the original filing (not reproduced in this text extraction), A is the position of any element in the data matrix of the original image, θ is the parallax angle, i.e., the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object, W is the width of the original image, H is the height of the original image, and W′1 and H′1 are the preset width and height of the image after clockwise rotation.
Repeating this process for every pixel gives the position of each gray value in the rotated matrix, forming the data matrix of the right-eye 2D image, from which the right-eye 2D image is generated.
When the data matrix of the original image is rotated counterclockwise, the position of the gray value of each pixel in the data matrix of the original image is multiplied by the image transformation matrix C2, i.e., L = A × C2, which gives the position of the gray value of that pixel in the data matrix of the counterclockwise-rotated left-eye 2D image.
where C2 is defined by an equation image in the original filing (not reproduced in this text extraction), A is the position of any element in the data matrix of the original image, θ is the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object, W is the width of the original image, H is the height of the original image, and W′2 and H′2 are the preset width and height of the image after counterclockwise rotation.
Repeating this process for every pixel gives the position of each gray value in the rotated matrix, forming the data matrix of the left-eye 2D image, from which the left-eye 2D image is generated.
Optionally, in practical applications, the width and height of the converted image may be preset values equal to those of the image before conversion, that is, H = H′1 = H′2 and W = W′1 = W′2.
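Since the entries of C1 and C2 are only available as equation images, the sketch below uses the conventional homogeneous rotation about the image center for a row vector A = (x, y, 1) post-multiplied by a 3×3 transform; it illustrates the kind of per-pixel mapping described above, and the exact entries of the patented matrices may differ.

    import numpy as np

    def rotation_transform(theta: float, w: int, h: int,
                           w_out: int, h_out: int) -> np.ndarray:
        """3x3 transform C such that A_new = A @ C, with A = (x, y, 1).

        Assumes rotation about the image center plus re-centering into an
        output image of size w_out x h_out; a positive theta rotates one way
        and a negative theta the other (counterclockwise vs. clockwise).
        """
        c, s = np.cos(theta), np.sin(theta)
        # Translation row that maps the old center (w/2, h/2) onto the new
        # center (w_out/2, h_out/2) after the rotation part is applied.
        tx = w_out / 2.0 - (w / 2.0 * c - h / 2.0 * s)
        ty = h_out / 2.0 - (w / 2.0 * s + h / 2.0 * c)
        return np.array([[c,   s, 0.0],
                         [-s,  c, 0.0],
                         [tx, ty, 1.0]])

    # Mapping one pixel position by half the parallax angle each way, with the
    # output size kept equal to the input size (W, H):
    # A = np.array([x, y, 1.0])
    # R = A @ rotation_transform(-theta / 2.0, W, H, W, H)   # right-eye position
    # L = A @ rotation_transform(+theta / 2.0, W, H, W, H)   # left-eye position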
S206, the right-eye 2D image and the left-eye 2D image are synthesized to generate a 3D image.
The execution process of step S206 is similar to the corresponding steps in the foregoing embodiments, and reference may be made to the relevant description in the embodiment shown in fig. 1, which is not repeated herein.
In this embodiment, the user's interpupillary distance is determined by analyzing the eye region in the face image of the user currently operating the photographing device, rather than being fixed to an average value. This way of determining the interpupillary distance is more accurate and more specific to the user. After the interpupillary distance is determined, the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object, namely the parallax angle, is calculated from it. The parallax angle is a key parameter for generating the 3D image, and its accuracy directly affects the stereoscopic effect of the generated 3D image. Rotating the data matrix of the original image by this accurate angle finally yields a 3D image with a better stereoscopic effect.
Fig. 3 is a schematic structural diagram of a first embodiment of a 3D image generating apparatus according to the present invention. As shown in fig. 3, the 3D image generating apparatus includes: an acquisition module 11, an angle determining module 12, a rotation module 13, and a synthesizing module 14.
The acquiring module 11 is configured to acquire a data matrix of an original image corresponding to an object currently viewed by a user.
And the angle determining module 12 is used for determining the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object.
And a rotation module 13, configured to respectively rotate the data matrix of the original image clockwise and counterclockwise according to the angle, so as to generate a right-eye 2D image and a left-eye 2D image.
And a synthesizing module 14 for synthesizing the right eye 2D image and the left eye 2D image to generate a 3D image.
Optionally, the angle determining module 12 in the 3D image generating apparatus is specifically configured to:
calculate the angle according to the following formula: θ = 2 × tan⁻¹((d/2)/u),
Wherein d is the pupil distance of the user and u is the object distance.
Optionally, the 3D image generation apparatus further includes: a calculation module 15.
The calculation module 15 is specifically configured to calculate the object distance according to the following formula:
1/f = 1/u + 1/v, i.e., u = (f × v)/(v - f)
where f is the focal length, u is the object distance, v is the image distance, v = ((Y - X) × t × M) + f, X is the number of steps the focus motor moves when the taking lens is focused to infinity, Y is the number of steps the focus motor moves when the taking lens is focused on the photographed object, M is the magnification of the taking lens, and t is the motion precision of the focus motor.
The apparatus shown in fig. 3 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Fig. 4 is a schematic structural diagram of a second embodiment of a 3D image generating device according to an embodiment of the present invention, and as shown in fig. 4, based on the embodiment shown in fig. 3, the 3D image generating device further includes: a shooting module 21 and a distance determining module 22.
And the shooting module 21 is used for extracting the eye region in the face image of the user shot in advance.
And a distance determining module 22, configured to determine a distance between center points of two eyeballs in the eye area as a pupil distance of the user.
Optionally, the rotation module 13 in the 3D image generation apparatus is specifically configured to:
determining the position of any element in the data matrix of the original image in the data matrix of the right-eye 2D image according to the following formula, to generate the data matrix of the right-eye 2D image: R = A × C1;
determining the position of any element in the data matrix of the original image in the data matrix of the left-eye 2D image according to the following formula, to generate the data matrix of the left-eye 2D image: L = A × C2;
wherein A is the position of any element in the data matrix of the original image, and C1 and C2 are the image transformation matrices corresponding to clockwise rotation and counterclockwise rotation, respectively (their entries are given as equation images in the original filing and are not reproduced in this text extraction);
θ is the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object, W is the width of the original image, H is the height of the original image, W′1 and H′1 are the preset width and height of the image after clockwise rotation, and W′2 and H′2 are the preset width and height of the image after counterclockwise rotation.
And generating a right eye 2D image and a left eye 2D image according to the data matrix of the right eye 2D image and the data matrix of the left eye 2D image respectively.
The apparatus shown in fig. 4 can perform the method of the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
The internal functions and structure of the 3D image generation apparatus are described above. In one possible design, the 3D image generation apparatus may be implemented as an electronic device, such as a photographing device with a single camera. Fig. 5 is a schematic structural diagram of an electronic device according to a first embodiment of the present invention. As shown in fig. 5, the electronic device includes: a memory 31 and a processor 32 connected to the memory 31. The memory 31 is used to store a program with which the electronic device executes the 3D image generation method provided in any of the above embodiments, and the processor 32 is configured to execute the program stored in the memory 31.
The program comprises one or more computer instructions which, when executed by the processor 32, are capable of performing the steps of:
acquiring a data matrix of an original image corresponding to a current object watched by a user;
determining the angle formed when the user's left-eye and right-eye lines of sight focus on the currently viewed object;
respectively rotating the data matrix of the original image clockwise and anticlockwise according to the angle to generate a right eye 2D image and a left eye 2D image;
the right-eye 2D image and the left-eye 2D image are synthesized to generate a 3D image.
Optionally, processor 32 is also configured to perform all or some of the method steps described above.
The electronic device may further include a communication interface 33 for communicating with other devices or a communication network.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above technical solutions may be embodied in the form of a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A 3D image generation method, characterized by comprising:
acquiring a data matrix of an original image corresponding to an object currently watched by a user, wherein the data matrix of the original image is obtained by shooting the object currently watched by the user through a single camera;
extracting an eye region in a pre-shot user face image;
determining the distance between the central points of two eyeballs in the eye region as the pupil distance of the user;
calculating the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object according to the following formula: θ = 2 × tan⁻¹((d/2)/u);
Wherein d is the pupil distance of the user, and u is the object distance;
respectively rotating the data matrix of the original image clockwise and anticlockwise according to the angle to generate a right eye 2D image and a left eye 2D image;
synthesizing the right-eye 2D image and the left-eye 2D image to generate a 3D image.
2. The method of claim 1, further comprising:
calculating the object distance according to the following formula:
1/f = 1/u + 1/v, i.e., u = (f × v)/(v - f)
Wherein f is the focal length, u is the object distance, v is the image distance, v = ((Y - X) × t × M) + f, X is the number of steps the focusing motor moves when the photographing lens is focused to infinity, Y is the number of steps the focusing motor moves when the photographing lens is focused on the photographed object, M is the magnification of the photographing lens, and t is the motion precision of the focusing motor.
3. The method according to claim 1, wherein the rotating the data matrix of the original image clockwise and counterclockwise according to the angle to generate a right eye 2D image and a left eye 2D image respectively comprises:
determining the position of any element in the data matrix of the original image in the data matrix of the right-eye 2D image according to the following formula, to generate the data matrix of the right-eye 2D image: R = A × C1;
determining the position of any element in the data matrix of the original image in the data matrix of the left-eye 2D image according to the following formula, to generate the data matrix of the left-eye 2D image: L = A × C2;
wherein A is the position of any element in the data matrix of the original image, and C1 and C2 are the image transformation matrices corresponding to clockwise rotation and counterclockwise rotation, respectively (their entries are given as equation images in the original filing and are not reproduced in this text extraction);
θ is the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object, W is the width of the original image, H is the height of the original image, W′1 and H′1 are the preset width and height of the image after clockwise rotation, and W′2 and H′2 are the preset width and height of the image after counterclockwise rotation;
and generating the right eye 2D image and the left eye 2D image according to the data matrix of the right eye 2D image and the data matrix of the left eye 2D image respectively.
4. A 3D image generation apparatus characterized by comprising:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring a data matrix of an original image corresponding to an object currently watched by a user, and the data matrix of the original image is obtained by shooting the object currently watched by the user through a single camera;
the angle determining module is used for extracting an eye region in a face image of the user shot in advance; determining the distance between the center points of the two eyeballs in the eye region as the pupil distance of the user; and calculating the angle formed when the user's left-eye and right-eye lines of sight are focused on the currently viewed object according to the following formula: θ = 2 × tan⁻¹((d/2)/u); wherein d is the pupil distance of the user, and u is the object distance;
the rotation module is used for respectively performing clockwise rotation and anticlockwise rotation on the data matrix of the original image according to the angle so as to generate a right eye 2D image and a left eye 2D image;
a synthesizing module for synthesizing the right eye 2D image and the left eye 2D image to generate a 3D image.
5. An electronic device, comprising: a memory, and a processor coupled to the memory;
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are to be invoked and executed by the processor;
the processor to execute the one or more computer instructions to implement the method of any of claims 1-3.
CN201710885264.3A 2017-09-26 2017-09-26 3D image generation method and device and electronic equipment Active CN107659772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710885264.3A CN107659772B (en) 2017-09-26 2017-09-26 3D image generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710885264.3A CN107659772B (en) 2017-09-26 2017-09-26 3D image generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107659772A CN107659772A (en) 2018-02-02
CN107659772B true CN107659772B (en) 2020-10-09

Family

ID=61116060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710885264.3A Active CN107659772B (en) 2017-09-26 2017-09-26 3D image generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107659772B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200422A (en) * 2018-02-11 2018-06-22 潍坊歌尔电子有限公司 Generate method, equipment, single-lens photographic device and the electronic equipment of 3D rendering
CN109756723B (en) * 2018-12-14 2021-06-11 深圳前海达闼云端智能科技有限公司 Method and apparatus for acquiring image, storage medium and electronic device
CN113225480A (en) * 2021-04-30 2021-08-06 纵深视觉科技(南京)有限责任公司 Image acquisition method, image acquisition device, electronic equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685194A (en) * 2008-09-26 2010-03-31 鸿富锦精密工业(深圳)有限公司 Automatic focusing device and automatic focusing method
CN101718912A (en) * 2009-11-27 2010-06-02 北京工业大学 Digitalized detail visualizer of industrial X-ray negative with variable zooming ratio
CN101742350A (en) * 2008-11-21 2010-06-16 索尼株式会社 Image signal processing apparatus, image signal processing method and image projecting device
CN102005062A (en) * 2010-11-09 2011-04-06 福州瑞芯微电子有限公司 Method and device for producing three-dimensional image for three-dimensional stereo display
CN102063735A (en) * 2010-11-09 2011-05-18 福州瑞芯微电子有限公司 Method and device for manufacturing three-dimensional image source by changing viewpoint angles
CN102168954A (en) * 2011-01-14 2011-08-31 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
CN102868902A (en) * 2011-07-08 2013-01-09 宏碁股份有限公司 Three-dimensional image display device and method thereof
CN103327357A (en) * 2012-03-19 2013-09-25 联想(北京)有限公司 Three-dimensional picture presenting method and device
CN105430331A (en) * 2015-11-13 2016-03-23 浙江宇视科技有限公司 Method and device for adjusting display direction of monitor image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5980097B2 (en) * 2012-11-07 2016-08-31 株式会社ジャパンディスプレイ Image display device and liquid crystal lens
TWI549478B (en) * 2014-09-04 2016-09-11 宏碁股份有限公司 Method for generating 3d image and electronic apparatus using the same

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685194A (en) * 2008-09-26 2010-03-31 鸿富锦精密工业(深圳)有限公司 Automatic focusing device and automatic focusing method
CN101742350A (en) * 2008-11-21 2010-06-16 索尼株式会社 Image signal processing apparatus, image signal processing method and image projecting device
CN101718912A (en) * 2009-11-27 2010-06-02 北京工业大学 Digitalized detail visualizer of industrial X-ray negative with variable zooming ratio
CN102005062A (en) * 2010-11-09 2011-04-06 福州瑞芯微电子有限公司 Method and device for producing three-dimensional image for three-dimensional stereo display
CN102063735A (en) * 2010-11-09 2011-05-18 福州瑞芯微电子有限公司 Method and device for manufacturing three-dimensional image source by changing viewpoint angles
CN102168954A (en) * 2011-01-14 2011-08-31 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
CN102868902A (en) * 2011-07-08 2013-01-09 宏碁股份有限公司 Three-dimensional image display device and method thereof
CN103327357A (en) * 2012-03-19 2013-09-25 联想(北京)有限公司 Three-dimensional picture presenting method and device
CN105430331A (en) * 2015-11-13 2016-03-23 浙江宇视科技有限公司 Method and device for adjusting display direction of monitor image

Also Published As

Publication number Publication date
CN107659772A (en) 2018-02-02

Similar Documents

Publication Publication Date Title
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
US8897502B2 (en) Calibration for stereoscopic capture system
US9600714B2 (en) Apparatus and method for calculating three dimensional (3D) positions of feature points
JP6862569B2 (en) Virtual ray tracing method and dynamic refocus display system for light field
JP2011232330A (en) Imaging apparatus, distance measuring method, and program
JP2008140271A (en) Interactive device and method thereof
CN107659772B (en) 3D image generation method and device and electronic equipment
US20140009570A1 (en) Systems and methods for capture and display of flex-focus panoramas
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN103959770A (en) Image processing device, image processing method and program
CN108282650B (en) Naked eye three-dimensional display method, device and system and storage medium
CN106228530A (en) A kind of stereography method, device and stereophotography equipment
CN113902781A (en) Three-dimensional face reconstruction method, device, equipment and medium
CN109978945B (en) Augmented reality information processing method and device
CN111292380B (en) Image processing method and device
CN111292234B (en) Panoramic image generation method and device
JP3054312B2 (en) Image processing apparatus and method
CN111462337B (en) Image processing method, device and computer readable storage medium
US11375107B2 (en) Apparatus and method for guiding multi-view capture
KR101893769B1 (en) Apparatus and method of generating 3 dimension object image model based on used view of viewer
KR20200019361A (en) Apparatus and method for three-dimensional face recognition
JP2019185283A (en) Three-dimensional model generation apparatus and program thereof, and IP stereoscopic image display system
US11348215B2 (en) Method and apparatus for reconstructing 4D image based on integral imaging
KR102151250B1 (en) Device and method for deriving object coordinate
JP2024062935A (en) Method and apparatus for generating stereoscopic display content - Patents.com

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211019

Address after: 264300 No. 699, Jiangjun South Road, Rongcheng City, Weihai City, Shandong Province

Patentee after: Rongcheng goer Technology Co.,Ltd.

Address before: 266104 Room 308, North Investment Street Service Center, Laoshan District, Qingdao, Shandong.

Patentee before: GOERTEK TECHNOLOGY Co.,Ltd.