CN114827578A - Naked eye 3D implementation method and device and storage medium


Info

Publication number: CN114827578A
Application number: CN202210563353.7A
Authority: CN (China)
Inventor: 庞通
Applicant / Assignee: Individual
Other languages: Chinese (zh)
Legal status: Pending
Prior art keywords: eye image, different directions, center point, image, screen

Classifications

    • H: Electricity > H04: Electric communication technique > H04N: Pictorial communication, e.g. television > H04N13/00: Stereoscopic video systems; multi-view video systems; details thereof
    • H04N13/30 Image reproducers > H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/20 Image signal generators > H04N13/204 using stereoscopic image cameras > H04N13/239: using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/30 Image reproducers > H04N13/366: Image reproducers using viewer tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides a naked-eye 3D implementation method, apparatus, and storage medium, offering a new technical approach for the development of naked-eye 3D that achieves a real-time naked-eye 3D effect on a flat screen. The naked-eye 3D implementation method comprises the following steps: acquiring the viewpoint position of the user's two eyes on the screen, where the screen displays an overlay image comprising a left-eye image and a right-eye image, and the center points of the left-eye image and the right-eye image are both located at the center of the screen; determining a contrast area centered on the viewpoint position; determining, according to the pixel information within the contrast area as the left-eye image and the right-eye image are translated simultaneously in different directions, the final distance by which the two images are to be translated; and translating the left-eye image and the right-eye image simultaneously in different directions by the final distance.

Description

Naked eye 3D implementation method and device and storage medium
Technical Field
The present application relates to the technical field of naked-eye 3D display, and in particular to a naked-eye 3D implementation method, apparatus, and storage medium.
Background
With the development of electronic and communication technology, people's expectations for image and video display keep rising, and the traditional flat display mode no longer satisfies them. 3D display has therefore gradually entered the image, video, social networking, and gaming domains.
Current mainstream 3D display schemes fall into two categories: 3D wearable-device display and naked-eye 3D display. 3D wearable devices can provide an all-around immersive 3D effect, but they are expensive and heavy, cannot be worn comfortably for long periods, readily cause dizziness and headache through latency and the vergence-accommodation conflict, and leave the wearer unable to see the surrounding environment, which is dangerous. Naked-eye 3D display, which requires no wearable equipment and is less prone to causing visual fatigue or vertigo, has therefore become a major technical direction. The currently common naked-eye 3D technologies mainly include the following schemes:
1. Light-barrier (parallax-barrier) 3D display, which uses alternating transparent and opaque (black) stripes to restrict the direction in which light travels, so that the two eyes see different content and the image information produces a parallax effect;
2. Lenticular-lens 3D display, which uses the focusing and refracting properties of cylindrical lenses to change the direction in which light travels, splitting the light so that the image information produces a parallax effect;
3. Holographic projection, a technology that records and reproduces a real three-dimensional image of an object using the principle of interference;
4. LED naked-eye 3D large screens, which are not strictly 3D display: a three-dimensional impression is built within a two-dimensional picture through object distance, size, shadow effects, and perspective relationships.
In the first two schemes the display pixels are divided into a left-eye group and a right-eye group, which leads to problems such as reduced brightness and definition, high equipment requirements, and limited screen size. Holographic projection cannot simultaneously balance the field of view, the depth-of-field range, and the resolution.
Disclosure of Invention
The embodiments of the present application provide a naked-eye 3D implementation method, apparatus, and storage medium, which offer a new technical approach for the development of naked-eye 3D and thereby achieve a real-time naked-eye 3D effect on a flat screen.
To this end, the following technical solutions are adopted:
In a first aspect, a naked-eye 3D implementation method is provided, comprising the following steps:
acquiring the viewpoint position of the user's two eyes on the screen, where the screen displays an overlay image comprising a left-eye image and a right-eye image, and the center point of the left-eye image and the center point of the right-eye image are both located at the center of the screen;
determining a contrast area centered on the viewpoint position;
determining, according to the pixel information within the contrast area while the left-eye image and the right-eye image are translated simultaneously in different directions, the final distance by which the two images are to be translated, where the pixel information includes the three-primary-color intensity values of the left-eye image pixels and the right-eye image pixels within the contrast area; during the simultaneous translation, the center point of the left-eye image either coincides with the center point of the right-eye image or lies to its left, and after the two images have been translated by the final distance, the distance between their center points is less than or equal to the interocular distance;
translating the left-eye image and the right-eye image simultaneously in different directions by the final distance.
Optionally, determining the final distance by which the left-eye image and the right-eye image are to be translated simultaneously in different directions, according to the pixel information within the contrast area during the translation, includes:
acquiring the pixel information within the contrast area each time the left-eye image and the right-eye image are translated in different directions by one unit distance;
calculating, from the pixel information, the three-primary-color deviation value between the left-eye image pixels and the right-eye image pixels within the contrast area;
when the two images have been translated by n unit distances in different directions and the computed deviation value is below a threshold, taking the final distance to be the sum of those n unit distances, where n is a positive integer.
Optionally, calculating the three-primary-color deviation value between the left-eye image pixels and the right-eye image pixels within the contrast area according to the pixel information includes evaluating the following formula:

$$b = \sum_{i \in Q} \sum_{c \in \{R, G, B\}} \left( c_{v1,i} - c_{v2,i} \right)^2$$

where b is the three-primary-color deviation value; $c_{v1,i}$ is the intensity value of primary color c of pixel i of the left-eye image v1 within the contrast area Q; $c_{v2,i}$ is the intensity value of primary color c of pixel i of the right-eye image v2 within the contrast area Q; and c ranges over R (red), G (green), and B (blue).
Optionally, the method further comprises:
detecting an update of the viewpoint position while the step of translating the left-eye image and the right-eye image simultaneously in different directions by the final distance is being performed;
and returning, with the updated viewpoint position, to the step of determining the contrast area centered on the viewpoint position.
Optionally, translating the left-eye image and the right-eye image simultaneously in different directions by the final distance includes:
translating the left-eye image and the right-eye image simultaneously in different directions by the final distance at a speed matching that at which the user's eyeballs converge images.
Optionally, acquiring the viewpoint position of the user's two eyes on the screen includes:
acquiring the viewpoint position of the user's two eyes on the screen by means of an eye tracker.
Optionally, the method further comprises:
acquiring the left-eye image and the right-eye image;
applying transparency processing at a preset ratio to the left-eye image or the right-eye image;
placing the transparency-processed left-eye image above the right-eye image, or the transparency-processed right-eye image above the left-eye image, with the center point of the left-eye image coinciding with the center point of the right-eye image;
and displaying the overlay image on the screen, where the center point of the left-eye image and the center point of the right-eye image in the overlay image are both located at the center of the screen.
Optionally, applying transparency processing to the left-eye image at the preset ratio includes:
applying 50% transparency processing to the left-eye image.
In a second aspect, a naked-eye 3D implementation apparatus is provided, comprising:
an acquisition module, configured to acquire the viewpoint position of the user's two eyes on the screen, where the screen displays an overlay image comprising a left-eye image and a right-eye image, and the center point of the left-eye image and the center point of the right-eye image are both located at the center of the screen;
a first confirming module, configured to determine a contrast area centered on the viewpoint position;
a second confirming module, configured to determine, according to the pixel information within the contrast area while the left-eye image and the right-eye image are translated simultaneously in different directions, the final distance by which the two images are to be translated, where the pixel information includes the three-primary-color intensity values of the left-eye image pixels and the right-eye image pixels within the contrast area; during the simultaneous translation, the center point of the left-eye image either coincides with the center point of the right-eye image or lies to its left, and after the two images have been translated by the final distance, the distance between their center points is less than or equal to the interocular distance;
and a translation module, configured to translate the left-eye image and the right-eye image simultaneously in different directions by the final distance.
In a third aspect, a computer-readable storage medium is provided, comprising a computer program or instructions which, when run on a computer, cause the computer to execute the naked-eye 3D implementation method according to any one of the possible implementations of the first aspect.
With this naked-eye 3D implementation method, the process in which the left and right eyes converge so that the two eyes' images coincide is performed by the electronic device. After this processing, the effect of the two superimposed single-eye images is displayed directly on the screen, and through the screen the brain sees an image with the same effect as it would see in the real world. This offers a new technical approach for the development of naked-eye 3D and thereby achieves a real-time naked-eye 3D effect on a flat screen.
The naked-eye 3D implementation apparatus and the storage medium belong to the same inventive concept as the naked-eye 3D implementation method, so they have the same beneficial effects, which are not repeated here.
Drawings
Fig. 1 is a schematic flow diagram of the naked-eye 3D implementation method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of the overlay image displayed on a screen according to an embodiment of the present application;
Fig. 3 is a first schematic structural diagram of the naked-eye 3D implementation apparatus provided in an embodiment of the present application;
Fig. 4 is a second schematic structural diagram of the naked-eye 3D implementation apparatus provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The following describes embodiments of the present invention in more detail with reference to the schematic drawings. The advantages and features of the present invention will become apparent from the following description and claims. It should be noted that the drawings are in a greatly simplified form and not to precise scale; they serve only to explain the embodiments conveniently and clearly.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "left", "right", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
This application is intended to present various aspects, embodiments or features around a system that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules etc. discussed in connection with the figures. Furthermore, a combination of these schemes may also be used.
In addition, in the embodiments of the present application, words such as "exemplarily" and "for example" are used to present examples, illustrations, or explanations. Any embodiment or design described as "exemplary" in this application is not to be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present concepts in a concrete fashion.
In the embodiments of the present invention, "information", "signal", "message", "channel", and "signaling" may sometimes be used interchangeably; it should be noted that the intended meanings are consistent when the distinctions are not emphasized. Likewise, "of", "corresponding", and "relevant" may sometimes be used interchangeably, with consistent intended meanings when the distinction is not emphasized.
The naked-eye 3D implementation method provided in the embodiment of the present application will be specifically described below with reference to fig. 1 to fig. 2.
Exemplarily, fig. 1 is a schematic flow diagram of a naked eye 3D implementation method provided in an embodiment of the present application. As shown in fig. 1, the naked eye 3D implementation method includes the following steps:
and S11, acquiring the positions of the two eyes of the user on the viewpoint of the screen.
In the embodiment of the application, the positions of the viewpoints of the two eyes of the user on the screen can be tracked through the eye tracking module. In particular, the information of the viewpoint located on the screen may be collected based on an eye tracker. The viewpoint information may include viewpoint position coordinates of both eyes of the user on the screen.
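As an illustrative sketch of this step (not the application's own implementation), the snippet below maps normalized gaze coordinates from a hypothetical eye-tracker API to pixel viewpoint coordinates on the screen; the GazeSample type, the gaze_to_viewpoint helper, and the screen resolution are all assumptions introduced here.

```python
from dataclasses import dataclass

SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution in pixels

@dataclass
class GazeSample:
    x_norm: float  # horizontal gaze position, normalized to [0, 1]
    y_norm: float  # vertical gaze position, normalized to [0, 1]

def gaze_to_viewpoint(sample: GazeSample) -> tuple[int, int]:
    """Map a normalized gaze sample to pixel coordinates of the viewpoint pe."""
    px = min(max(int(sample.x_norm * SCREEN_W), 0), SCREEN_W - 1)
    py = min(max(int(sample.y_norm * SCREEN_H), 0), SCREEN_H - 1)
    return px, py

# A gaze sample at the screen center maps to pixel (960, 540).
print(gaze_to_viewpoint(GazeSample(0.5, 0.5)))
```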
Optionally, the naked eye 3D implementation method provided in the embodiment of the present application may further include the following steps:
s101, a left eye image and a right eye image are acquired.
Wherein the left-eye image and the right-eye image are both 2D images. The left-eye image and the right-eye image correspond to images captured by a single eye when the left eye and the right eye simultaneously look at an object. The left-eye image and the right-eye image may be a photograph and a video source photographed by a 3D camera, or may be a film source or a 3D game algorithmically converted into 3D from a 2D photograph or a 2D video.
S102, applying transparency processing at a preset ratio to the left-eye image or the right-eye image.
Specifically, the preset ratio may be 50%, i.e. the left-eye image or the right-eye image is made 50% transparent. In other embodiments the preset ratio may take other values; this application does not specifically limit it.
Fig. 2 is a schematic diagram of the overlay image displayed on the screen. As shown in Fig. 2, the processed left-eye image is labeled v1 and its center point p1; the right-eye image remains unchanged and is labeled v2, with its center point labeled p2.
S103, placing the transparency-processed left-eye image above the right-eye image, or the transparency-processed right-eye image above the left-eye image, so that the center point of the left-eye image coincides with the center point of the right-eye image.
The transparency-processed image is placed on top of the image that was not processed: either the transparency-processed left-eye image v1 above the unprocessed right-eye image v2, or the transparency-processed right-eye image v2 above the unprocessed left-eye image v1, with p1 and p2 coinciding.
S104, displaying the resulting overlay image on the screen, with the center point of the left-eye image and the center point of the right-eye image both located at the center of the screen.
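The following is a minimal sketch of steps S101 to S104 using the Pillow imaging library; the file names are hypothetical, and blending at alpha = 0.5 is used here as the equivalent of placing a 50%-transparent v1 over an opaque v2.

```python
from PIL import Image

def build_overlay(left_path: str, right_path: str, alpha: float = 0.5) -> Image.Image:
    """Overlay the left-eye image v1 on the right-eye image v2 (steps S102-S103).

    Both images share one frame, so their center points p1 and p2 coincide.
    """
    v2 = Image.open(right_path).convert("RGB")             # right-eye image, unchanged
    v1 = Image.open(left_path).convert("RGB").resize(v2.size)
    # blend(v2, v1, 0.5) = 0.5*v2 + 0.5*v1, i.e. v1 at 50% transparency over v2.
    return Image.blend(v2, v1, alpha)

overlay = build_overlay("left_eye.png", "right_eye.png")   # hypothetical file names
overlay.save("overlay.png")                                # shown centered on screen (S104)
```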
As shown in Fig. 2, after the viewpoint position pe of the user's two eyes on the screen has been acquired, step S12 is performed.
S12, determining a contrast area centered on the viewpoint position.
Specifically, the contrast area may be a rectangular or circular range centered on the viewpoint. As shown in Fig. 2, the contrast area Q is a rectangular range centered on the viewpoint pe. The contrast area Q should be set suitably small: this saves computation, and a small Q matches the focus range of the human eye.
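As a sketch under stated assumptions, the helper below extracts a rectangular contrast area Q centered on the viewpoint pe and clipped to the image bounds; the half-width of 32 pixels is an assumed value, since the application only requires Q to be suitably small.

```python
import numpy as np

def contrast_region(img: np.ndarray, viewpoint: tuple[int, int], half: int = 32) -> np.ndarray:
    """Return the pixels of img inside a square of side 2*half+1 centered on viewpoint."""
    h, w = img.shape[:2]
    px, py = viewpoint
    x0, x1 = max(px - half, 0), min(px + half + 1, w)
    y0, y1 = max(py - half, 0), min(py + half + 1, h)
    return img[y0:y1, x0:x1]
```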
After the contrast area has been determined, step S13 is performed.
S13, determining, according to the pixel information within the contrast area while the left-eye image and the right-eye image are translated simultaneously in different directions, the final distance by which the two images are to be translated.
The pixel information includes the three-primary-color intensity values of the left-eye image pixels and the right-eye image pixels within the contrast area. While the two images are translated simultaneously in different directions, the center point of the left-eye image either coincides with the center point of the right-eye image or lies to its left, and after the two images have been translated by the final distance, the distance between their center points is less than or equal to the interocular distance.
In other words, the center points of the two images are either coincident, or the center point of the left-eye image lies to the left of that of the right-eye image with the distance between them no greater than the interocular distance.
Translating the two images simultaneously in different directions covers two cases: translating both outward, and translating both inward. When the viewpoint position is acquired for the first time, the center points of the two images coincide with the center of the screen, i.e. the images are fully overlapped at the initial position, and only outward translation is needed. If the center points do not coincide at the initial position, both inward and outward translations must be tried.
Specifically, step S13, determining the final distance by which the left-eye image and the right-eye image are to be translated simultaneously in different directions according to the pixel information within the contrast area, may include the following steps:
S131, acquiring the pixel information within the contrast area each time the left-eye image and the right-eye image are translated in different directions by one unit distance.
S132, calculating, from the pixel information, the three-primary-color deviation value between the left-eye image pixels and the right-eye image pixels within the contrast area.
Specifically, the three-primary-color deviation value is the sum of the squared differences of the three primary color values over all pixels in the contrast area; the lower the deviation value, the higher the matching degree, where the matching degree measures how similar the left-eye and right-eye images are within the contrast area. Human-eye focusing means that the left and right eye images are identical at the viewpoint position; only then does a person see a clear image. That is, the higher the matching degree, the sharper the image that can be seen.
The three-primary-color deviation value is computed with the following formula:

$$b = \sum_{i \in Q} \sum_{c \in \{R, G, B\}} \left( c_{v1,i} - c_{v2,i} \right)^2$$

where b is the three-primary-color deviation value; $c_{v1,i}$ is the intensity value of primary color c of pixel i of the left-eye image v1 within the contrast area Q; $c_{v2,i}$ is the intensity value of primary color c of pixel i of the right-eye image v2 within the contrast area Q; and c ranges over R (red), G (green), and B (blue).
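A direct sketch of the formula above, applied to two equally sized RGB regions such as those returned by the contrast_region helper sketched earlier; the cast to a wider integer type is an implementation detail added here to avoid uint8 overflow.

```python
import numpy as np

def primary_color_deviation(q1: np.ndarray, q2: np.ndarray) -> int:
    """b = sum over pixels i in Q and channels c in {R, G, B} of (c_v1,i - c_v2,i)^2."""
    d = q1.astype(np.int64) - q2.astype(np.int64)  # widen first to avoid wrap-around
    return int(np.sum(d * d))
```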
S133, when the left-eye image and the right-eye image have been translated by n unit distances in different directions and the computed deviation value is below the threshold, taking the final distance to be the sum of those n unit distances, where n is a positive integer.
If the two images are fully overlapped at the initial position, only outward translation is needed: first compute the deviation value after translating both images outward from the initial position by one unit distance, then after two unit distances, and so on, until the deviation value after n unit distances is below the threshold, with the sum of the n unit distances no greater than half the interocular distance.
If the two images are not fully overlapped at the initial position, compute the deviation value after translating both images outward from the initial position by one unit distance, then after translating them inward by one unit distance; then outward by two unit distances, then inward by two unit distances; and so on, until the deviation value after translating outward by n unit distances, or after translating inward by n unit distances, is below the threshold, with the sum of the n unit distances no greater than half the interocular distance.
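A sketch of the unit-distance search of steps S131 to S133 under simplifying assumptions: horizontal translation is modeled with np.roll (a real implementation would pad rather than wrap), the unit distance, threshold, and eye-distance limit are assumed values, and contrast_region and primary_color_deviation are the helpers sketched above.

```python
import numpy as np

UNIT = 2            # assumed unit distance, in pixels
THRESHOLD = 50_000  # assumed deviation threshold
MAX_SHIFT = 32      # assumed limit: half the interocular distance, in pixels

def shift_pair(v1: np.ndarray, v2: np.ndarray, d: int):
    """Translate v1 by -d and v2 by +d pixels horizontally; d > 0 moves them apart (outward)."""
    return np.roll(v1, -d, axis=1), np.roll(v2, d, axis=1)

def find_final_distance(v1, v2, viewpoint, fully_overlapped: bool):
    """Try n = 1, 2, ... unit distances until the deviation in Q drops below the threshold."""
    directions = (+1,) if fully_overlapped else (+1, -1)  # overlapped start: outward only
    for n in range(1, MAX_SHIFT // UNIT + 1):
        for sign in directions:                           # +1 outward, -1 inward
            s1, s2 = shift_pair(v1, v2, sign * n * UNIT)
            q1 = contrast_region(s1, viewpoint)
            q2 = contrast_region(s2, viewpoint)
            if primary_color_deviation(q1, q2) < THRESHOLD:
                return sign * n * UNIT                    # signed final distance per image
    return None                                           # no sufficient match within the limit
```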
Note that although translation is mentioned in steps S13 and S131, in those steps it serves only the computation of the deviation value; no translation needs to be shown on the screen while they run. Only after the final distance whose deviation value is below the threshold has been found is step S14 performed, which translates the left-eye and right-eye images visibly on the screen.
S14, translating the left-eye image and the right-eye image simultaneously in different directions by the final distance.
For example, once the final distance by which the two images must be translated outward has been determined, so that the deviation value between the left-eye and right-eye pixels within the contrast area will be below the threshold, the left-eye image and the right-eye image are translated outward by that final distance.
As shown in Fig. 2, after the final distance is determined, the left-eye image v1 and the right-eye image v2 are translated outward by the final distance, starting from their current positions. Optionally, the translation is carried out at a speed matching that at which the user's eyeballs converge images.
Optionally, the naked-eye 3D implementation method of the present application may further include the following steps:
S15, detecting an update of the viewpoint position while the step of translating the left-eye image and the right-eye image simultaneously in different directions by the final distance is being performed.
S16, returning, with the updated viewpoint position, to the step of determining the contrast area centered on the viewpoint position.
That is, the position of the viewpoint on the screen is acquired in real time by the eye tracker. If the position of the viewpoint pe in Fig. 2 changes while step S14 is being performed, the translation stops, and the method returns to step S12 to determine a new contrast area centered on the changed viewpoint position.
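Tying the steps together, a sketch of the overall real-time loop with the restart behavior of S15 and S16; tracker.latest_viewpoint() stands in for the eye-tracker polling sketched earlier, draw_at() for whatever routine renders v1 and v2 translated by a signed distance, and the per-frame stepping models the gradual, convergence-speed translation of S14.

```python
def run(v1, v2, tracker, draw_at, fully_overlapped=True):
    """Track the viewpoint, find the final distance, and animate the translation (S11-S16)."""
    while True:
        viewpoint = tracker.latest_viewpoint()             # S11: current viewpoint pe
        d = find_final_distance(v1, v2, viewpoint, fully_overlapped)  # S12 + S13
        if d is None:
            continue                                       # no match yet; keep tracking
        step = 1 if d > 0 else -1
        for shift in range(0, d + step, step):             # S14: translate gradually
            if tracker.latest_viewpoint() != viewpoint:    # S15: viewpoint moved
                break                                      # S16: restart from S12
            draw_at(v1, v2, shift)                         # v1 at -shift, v2 at +shift
        fully_overlapped = False                           # images have moved off center
```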
In particular, when the naked-eye 3D implementation method of the present application is applied to a 3D game, the game has model coordinates: the left-eye and right-eye images differ because of the game's modeling, but it is known where each object lies in both images. It therefore suffices to move the parts that are the same in the left and right images onto the viewpoint. Since the left and right displacements must be equal, one only needs to find the same object parts to the left and right of the viewpoint; the pair of points equidistant from the viewpoint determines the final translation distance.
It should be noted that the human eye perceives a 3D effect because, when the brain fuses the different images obtained by the left and right eyes, the non-viewpoint portions carry parallax; the brain perceives the effect of two different images displayed in superposition, and as the viewpoint moves, the parallax and the focus point change continuously. Conversely, if the left and right eyes receive identical images there is no parallax, and depth relationships cannot be perceived at the physiological level. The brain's process of fusing the images is the process of controlling the eyeballs to rotate so that the two lines of sight converge at the viewpoint. Because the eyeball is approximately a sphere, the acquired image lies on a spherical surface; when the eyeball rotates, the boundary of the acquired image changes slightly, but the positional relationships of objects within the image do not change. Therefore, controlling the eyeballs to rotate and converging the images at the viewpoint is fully equivalent to directly translating the two single-eye images so that the viewpoints coincide; the image formed in the brain is exactly the same.
The focus of the present application is that the process in which the left and right eyes converge so that the two eyes' images coincide is completed by the electronic device. After the electronic device has processed the images, the effect of the two superimposed single-eye images is displayed directly on the screen, and through the screen the brain sees an image with the same effect as it would see in the real world.
When the viewpoint of the human eye moves on the screen, the eye tracker acquires the viewpoint information in real time and transmits it to the electronic device. Because the photoreceptor cells of the human eye need a certain time to refresh, and the eye tracker is faster than that, the electronic device can process the images in real time and begin outputting them before the photoreceptors register what the eyes would have adjusted to on their own.
Since the image processing is performed in advance, in real time, for each viewpoint of the user, the brain obtains dynamic image effects with varying parallax. When the viewpoint changes, the effect of the eyeballs changing convergence is shown on the screen in real time. Meanwhile, because objects at non-viewpoint locations in the left-eye and right-eye images do not coincide, a sense of blur arises, consistent with how the eyes see unfocused regions away from the viewpoint, so an effect of accommodative adjustment is also produced.
Compared with a video that actively shifts focus to convey the near-far relationships between objects, the effect of this method is that, during playback, the viewer's eyes can freely and autonomously discover the near-far relationships of the objects in the image, with the parallax effect present at the same time.
The photoreceptive refresh rate of human visual cells is only about 15-60 Hz, while the sampling rate of an eye tracker easily reaches 120 Hz or higher. This is why eye-tracker users often feel that the viewpoint fed back by the eye tracker is displayed before they have consciously seen the real viewpoint themselves. Within the time difference between the eye tracker's sampling and the photoreceptors registering the image, the image processing and display can easily be completed. This provides the technical basis for implementing the scheme of the present application.
The method applies to photos and video sources shot with a 3D camera, to 3D film sources converted algorithmically from 2D photos or 2D video, and to 3D games.
The naked-eye 3D implementation method shown in Fig. 1 has the following advantages:
First, the application relies on an eye tracker and places no additional requirements on the screen, nor does it require any wearable equipment. Any screen paired with an eye tracker can achieve the naked-eye 3D display effect, so 3D display can easily be integrated into a user's computer, mobile phone, or other terminal device.
Second, there are no additional requirements on the 3D film source; an ordinary 3D film source or game modeling suffices. The displayed object needs no all-around modeling or information acquisition, and no algorithmic preprocessing; existing 3D film sources can be displayed in 3D directly with this scheme.
Third, the computing-power requirement is modest; most ordinary home computers or mobile phones can meet it, so the method is easy to popularize quickly.
Fourth, compared with wearable devices and ordinary grating or lenticular naked-eye display screens, the distance from the eyes to an ordinary display screen is larger, so the corresponding real-scene range of eyeball focal adjustment is smaller; the vergence-accommodation conflict is greatly alleviated, and eye fatigue, dizziness, and headache are less likely.
The naked-eye 3D implementation method provided in the embodiments of the present application has been described in detail above with reference to Fig. 1 and Fig. 2. The naked-eye 3D implementation apparatus that performs this method is described in detail below with reference to Fig. 3 and Fig. 4.
Exemplarily, Fig. 3 is a first schematic structural diagram of the naked-eye 3D implementation apparatus provided in an embodiment of the present application. As shown in Fig. 3, the naked-eye 3D implementation apparatus includes: an acquisition module 31, a first confirming module 32, a second confirming module 33, and a translation module 34. For ease of illustration, Fig. 3 shows only the main components of the apparatus.
The acquiring module 31 is configured to acquire viewpoint positions of both eyes of a user on a screen; wherein the screen displays an overlapping image comprising a left-eye image and a right-eye image; the center point of the left eye image and the center point of the right eye image are both located in the center of the screen.
The first confirming module 32 is configured to confirm the contrast area centered on the viewpoint position according to the viewpoint position.
The second confirming module 33 is configured to determine, according to the pixel information within the contrast area while the left-eye image and the right-eye image are translated simultaneously in different directions, the final distance by which the two images are to be translated; the pixel information includes the three-primary-color intensity values of the left-eye image pixels and the right-eye image pixels within the contrast area; during the simultaneous translation, the center point of the left-eye image either coincides with the center point of the right-eye image or lies to its left, and after the two images have been translated by the final distance, the distance between their center points is less than or equal to the interocular distance.
The translation module 34 is configured to translate the left-eye image and the right-eye image simultaneously in different directions by the final distance.
It should be noted that the naked-eye 3D implementation apparatus may be a terminal device or a network device, a chip (system) or other component or assembly that can be disposed in a terminal device or network device, or an apparatus including a terminal device or network device; this application does not limit it.
In addition, for the technical effects of the naked-eye 3D implementation apparatus, reference may be made to the technical effects of the naked-eye 3D implementation method illustrated in Fig. 1, which are not repeated here.
Exemplarily, fig. 4 is a schematic structural diagram ii of the naked eye 3D implementation apparatus provided in the embodiment of the present application. The naked eye 3D implementation apparatus may be a terminal device or a network device, or may be a chip (system) or other component or assembly that may be disposed in the terminal device or the network device. As shown in fig. 4, the naked eye 3D implementation apparatus 400 may include a processor 401. Optionally, the naked eye 3D implementation apparatus 400 may further include a memory 402 and/or a transceiver 403. Wherein the processor 401 is coupled to the memory 402 and the transceiver 403, such as may be connected by a communication bus.
The following specifically describes each component of the naked-eye 3D implementation apparatus 400 with reference to fig. 4:
the processor 401 is a control center of the naked eye 3D implementation apparatus 400, and may be a single processor or a collective term for multiple processing elements. For example, the processor 401 is one or more Central Processing Units (CPUs), or may be an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as: one or more microprocessors (digital signal processors, DSPs), or one or more Field Programmable Gate Arrays (FPGAs).
Alternatively, the processor 401 may perform various functions of the naked eye 3D implementation apparatus 400 by running or executing software programs stored in the memory 402 and calling data stored in the memory 402.
In particular implementations, processor 401 may include one or more CPUs such as CPU0 and CPU1 shown in fig. 4 as an example.
In a specific implementation, as an embodiment, the naked-eye 3D implementation apparatus 400 may also include multiple processors, such as the processor 401 and the processor 404 shown in Fig. 4. Each of these processors may be a single-core processor or a multi-core processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 402 is configured to store a software program for executing the scheme of the present application, and is controlled by the processor 401 to execute the software program.
Alternatively, memory 402 may be a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 402 may be integrated with the processor 401, or may be independent, and is coupled to the processor 401 through an interface circuit (not shown in fig. 4) of the naked-eye 3D implementation apparatus 400, which is not specifically limited in this embodiment of the present application.
A transceiver 403 for communication with other naked eye 3D enabled devices. For example, the naked eye 3D implementation apparatus 400 is a terminal device, and the transceiver 403 may be used to communicate with a network device or communicate with another terminal device. For another example, the naked eye 3D implementation apparatus 400 is a network device, and the transceiver 403 may be used to communicate with a terminal device or communicate with another network device.
Optionally, the transceiver 403 may include a receiver and a transmitter (not separately shown in fig. 4). Wherein the receiver is configured to perform a receiving function and the transmitter is configured to perform a transmitting function.
Optionally, the transceiver 403 may be integrated with the processor 401, or may be independent and coupled to the processor 401 through an interface circuit (not shown in fig. 4) of the naked-eye 3D implementation apparatus 400, which is not specifically limited in this embodiment of the present invention.
It should be noted that the structure of the naked-eye 3D implementation apparatus 400 shown in Fig. 4 does not constitute a limitation on the apparatus; an actual naked-eye 3D implementation apparatus may include more or fewer components than shown, combine certain components, or arrange the components differently.
In addition, for technical effects of the naked eye 3D implementation apparatus 400, reference may be made to the technical effects of the naked eye 3D implementation method described in the foregoing method embodiment, and details are not described here.
An embodiment of the present application further provides a chip system, including: a processor coupled to a memory for storing a program or instructions that, when executed by the processor, cause the system-on-chip to implement the method of any of the above method embodiments.
Optionally, the system on a chip may have one or more processors. The processor may be implemented by hardware or by software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
Optionally, there may be one or more memories in the system-on-chip. The memory may be integrated with the processor or disposed separately from the processor, which is not limited in this application. For example, the memory may be a non-transitory memory, such as a read-only memory (ROM), which may be integrated with the processor on the same chip or disposed separately on different chips; the type of the memory and the arrangement of the memory and the processor are not specifically limited in this application.
The system-on-chip may be, for example, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other integrated chips.
It should be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the subject application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of Random Access Memory (RAM) are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct bus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When software is used, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with particular reference to the former and latter text.
In the present application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A naked-eye 3D implementation method, characterized by comprising the following steps:
acquiring the positions of the viewpoints of the two eyes of the user on the screen; wherein the screen displays an overlay image comprising a left eye image and a right eye image; the center point of the left eye image and the center point of the right eye image are both positioned in the center of the screen;
confirming a contrast area taking the viewpoint position as a center according to the viewpoint position;
confirming the final distance of the left eye image and the right eye image which are translated along different directions simultaneously according to the pixel information in the contrast area when the left eye image and the right eye image are translated along different directions simultaneously; wherein the pixel information includes three primary color intensity values of the left-eye image pixel and the right-eye image pixel in the contrast region; when the left eye image and the right eye image are translated along different directions simultaneously, the center point of the left eye image and the center point of the right eye image are coincident or the center point of the left eye image is positioned on the left side of the center point of the right eye image, and after the left eye image and the right eye image are translated along different directions simultaneously by the final distance, the distance between the center point of the left eye image and the center point of the right eye image is smaller than or equal to the human eye distance;
translating the left-eye image and the right-eye image simultaneously in different directions by the final distance.
2. The method of claim 1, wherein the determining a final distance that the left-eye image and the right-eye image are simultaneously translated in different directions based on pixel information within the contrasting area when the left-eye image and the right-eye image are simultaneously translated in different directions comprises:
acquiring the pixel information in the contrast area each time the left-eye image and the right-eye image are translated in different directions by one unit distance;
calculating, according to the pixel information, a three primary color deviation value between the left-eye image pixels and the right-eye image pixels in the contrast area;
when the left-eye image and the right-eye image have been translated by n unit distances in different directions and the calculated three primary color deviation value is confirmed to be smaller than a threshold value, taking the sum of the n unit distances as the final distance; wherein n is a positive integer.
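A minimal sketch of the unit-distance search of claim 2, assuming pixel-sized unit distances, a pair of NumPy slices as the contrast area, and the absolute-difference deviation measure reconstructed under claim 3; find_final_distance and max_steps are illustrative names, not the patent's:

```python
import numpy as np

def find_final_distance(left: np.ndarray, right: np.ndarray,
                        region: tuple[slice, slice],
                        threshold: float, max_steps: int = 200) -> int:
    """Translate both views one unit distance at a time (claim 2) until
    the three primary color deviation inside the contrast area drops
    below the threshold; the final distance is the number of unit steps n."""
    for n in range(1, max_steps + 1):
        l_shift = np.roll(left, -n, axis=1)   # left-eye image shifted left by n
        r_shift = np.roll(right, n, axis=1)   # right-eye image shifted right by n
        diff = np.abs(l_shift[region].astype(np.int64)
                      - r_shift[region].astype(np.int64))
        if float(diff.sum()) < threshold:     # deviation value b below threshold
            return n                          # final distance = sum of n unit distances
    return max_steps                          # fallback if the threshold is never met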
3. The method of claim 2, wherein calculating, according to the pixel information, the three primary color deviation value between the left-eye image pixels and the right-eye image pixels in the contrast area comprises:
the three primary color deviation value is obtained by calculating the following formula:

$$ b = \sum_{i \in Q} \sum_{c \in \{R, G, B\}} \left| c_{v1,i} - c_{v2,i} \right| $$

wherein $b$ is the three primary color deviation value; $c_{v1,i}$ is the intensity value of the primary color $c$ of pixel $i$ of the left-eye image $v1$ in the contrast area $Q$; $c_{v2,i}$ is the intensity value of the primary color $c$ of pixel $i$ of the right-eye image $v2$ in the contrast area $Q$; and $R$ is red, $G$ is green, and $B$ is blue.
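Expressed in code under the same assumption (a sum of absolute per-channel differences; the published formula image itself is not machine-readable), the deviation value of claim 3 reduces to:

```python
import numpy as np

def tricolor_deviation(left_q: np.ndarray, right_q: np.ndarray) -> float:
    """Three primary color deviation value b over the contrast area Q.

    left_q / right_q are the (H, W, 3) RGB crops of the left-eye image v1
    and the right-eye image v2 inside Q; channels are c in {R, G, B}.
    """
    d = left_q.astype(np.int64) - right_q.astype(np.int64)  # c_v1,i - c_v2,i
    return float(np.abs(d).sum())  # summed over pixels i and channels c
```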
4. The method according to any one of claims 1-3, further comprising:
confirming an update of the viewpoint position while the step of simultaneously translating the left-eye image and the right-eye image in different directions by the final distance is being performed;
and returning, based on the updated viewpoint position, to the step of confirming the contrast area centered on the viewpoint position.
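The loop implied by claim 4 is a tracking loop; in this sketch, get_viewpoint and step_once are hypothetical callables standing in for the eye tracker of claim 6 and for the steps of claims 1 to 3:

```python
import threading

def run_tracking_loop(get_viewpoint, step_once, stop: threading.Event) -> None:
    # While the translation step is being performed, keep confirming
    # viewpoint updates and re-enter the contrast-area step (claim 4).
    while not stop.is_set():
        viewpoint = get_viewpoint()   # updated (x, y) viewpoint on the screen
        step_once(viewpoint)          # contrast area -> final distance -> translate
```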
5. The method of claim 1, wherein said translating the left-eye image and the right-eye image simultaneously in different directions by the final distance comprises:
simultaneously translating the left-eye image and the right-eye image in different directions by the final distance, at a speed based on the speed at which the user's eyeballs converge on the image.
6. The method of claim 1, wherein acquiring the viewpoint positions of the user's two eyes on the screen comprises:
acquiring the viewpoint positions of the user's two eyes on the screen by using an eye tracker.
7. The method of claim 1, further comprising:
acquiring the left-eye image and the right-eye image;
performing transparency processing on the left-eye image or the right-eye image at a preset ratio;
placing the transparency-processed left-eye image over the right-eye image, or placing the transparency-processed right-eye image over the left-eye image, such that the center point of the left-eye image coincides with the center point of the right-eye image;
and displaying the resulting overlay image on the screen, wherein the center point of the left-eye image and the center point of the right-eye image in the overlay image are both located at the center of the screen.
8. The method according to claim 7, wherein performing transparency processing on the left-eye image at the preset ratio comprises:
performing 50% transparency processing on the left-eye image.
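Reading the transparency processing of claims 7 and 8 as standard alpha blending, which is an interpretation rather than a definition given in the patent, the 50% overlay with coincident center points might look like:

```python
import numpy as np

def make_overlay(left: np.ndarray, right: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Place the partially transparent left-eye image over the right-eye
    image with their center points coincident (both images assumed to be
    same-size (H, W, 3) RGB arrays, so the centers align automatically).

    alpha=0.5 corresponds to the 50% transparency of claim 8."""
    mix = alpha * left.astype(np.float32) + (1.0 - alpha) * right.astype(np.float32)
    return np.clip(mix, 0, 255).astype(np.uint8)
```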
9. A naked eye 3D implementation device, characterized by comprising:
an acquisition module, configured to acquire viewpoint positions of a user's two eyes on a screen; wherein the screen displays an overlay image comprising a left-eye image and a right-eye image, and the center point of the left-eye image and the center point of the right-eye image are both located at the center of the screen;
a first confirming module, configured to confirm, according to the viewpoint position, a contrast area centered on the viewpoint position;
a second confirming module, configured to confirm, according to pixel information in the contrast area while the left-eye image and the right-eye image are simultaneously translated in different directions, a final distance by which the left-eye image and the right-eye image are to be simultaneously translated in different directions; wherein the pixel information comprises three primary color intensity values of left-eye image pixels and right-eye image pixels in the contrast area; while the left-eye image and the right-eye image are simultaneously translated in different directions, the center point of the left-eye image coincides with, or is located on the left side of, the center point of the right-eye image; and after the left-eye image and the right-eye image have been simultaneously translated in different directions by the final distance, the distance between the center point of the left-eye image and the center point of the right-eye image is less than or equal to the distance between human eyes;
and a translation module, configured to translate the left-eye image and the right-eye image simultaneously in different directions by the final distance.
10. A computer-readable storage medium, comprising a computer program or instructions which, when run on a computer, cause the computer to perform the naked eye 3D implementation method of any one of claims 1 to 7.
CN202210563353.7A 2022-05-20 2022-05-20 Naked eye 3D implementation method and device and storage medium Pending CN114827578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210563353.7A CN114827578A (en) 2022-05-20 2022-05-20 Naked eye 3D implementation method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210563353.7A CN114827578A (en) 2022-05-20 2022-05-20 Naked eye 3D implementation method and device and storage medium

Publications (1)

Publication Number Publication Date
CN114827578A (en) 2022-07-29

Family

ID=82516876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210563353.7A Pending CN114827578A (en) 2022-05-20 2022-05-20 Naked eye 3D implementation method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114827578A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120056885A1 (en) * 2010-09-08 2012-03-08 Namco Bandai Games Inc. Image generation system, image generation method, and information storage medium
CN107885325A (en) * 2017-10-23 2018-04-06 上海玮舟微电子科技有限公司 A kind of bore hole 3D display method and control system based on tracing of human eye
CN108234983A (en) * 2017-12-31 2018-06-29 深圳超多维科技有限公司 A kind of three-dimensional imaging processing method, device and electronic equipment
CN113891061A (en) * 2021-11-19 2022-01-04 深圳市易快来科技股份有限公司 Naked eye 3D display method and display equipment

Similar Documents

Publication Publication Date Title
WO2015180659A1 (en) Image processing method and image processing device
CN109661687A (en) Fixed range is virtual and augmented reality system and method
US9813693B1 (en) Accounting for perspective effects in images
JP6619871B2 (en) Shared reality content sharing
CN107810633A (en) Three-dimensional rendering system
CN103034330B (en) A kind of eye interaction method for video conference and system
US10885670B2 (en) Stereo vision measuring system and stereo vision measuring method
CN108881893A (en) Naked eye 3D display method, apparatus, equipment and medium based on tracing of human eye
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
US20220385880A1 (en) 2d digital image capture system and simulating 3d digital image and sequence
US20220078392A1 (en) 2d digital image capture system, frame speed, and simulating 3d digital image sequence
US20240155096A1 (en) 2d image capture system & display of 3d digital image
CN109978945B (en) Augmented reality information processing method and device
US20210392314A1 (en) Vehicle terrain capture system and display of 3d digital image and 3d sequence
Hong et al. Towards 3D television through fusion of kinect and integral-imaging concepts
CN114827578A (en) Naked eye 3D implementation method and device and storage medium
WO2020210937A1 (en) Systems and methods for interpolative three-dimensional imaging within the viewing zone of a display
US10277881B2 (en) Methods and devices for determining visual fatigue of three-dimensional image or video and computer readable storage medium
KR20190050737A (en) Display device and method
WO2022093376A1 (en) Vehicle terrain capture system and display of 3d digital image and 3d sequence
Hsu et al. HoloTube: a low-cost portable 360-degree interactive autostereoscopic display
CN116643648B (en) Three-dimensional scene matching interaction method, device, equipment and storage medium
Songnian et al. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1
CN108156442A (en) A kind of three-dimensional imaging processing method, device and electronic equipment
US20240137481A1 (en) Method And Apparatus For Generating Stereoscopic Display Contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination