CN109819231A - Vision-adaptive naked-eye 3D image processing method and device - Google Patents

Vision-adaptive naked-eye 3D image processing method and device Download PDF

Info

Publication number
CN109819231A
CN109819231A (application CN201910080968.2A)
Authority
CN
China
Prior art keywords
information
observer
current observer
interpupillary distance
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910080968.2A
Other languages
Chinese (zh)
Inventor
李波
高波
牛德彬
张晓辰
崔向东
周林林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Original Assignee
DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Priority to CN201910080968.2A
Publication of CN109819231A
Legal status: Pending

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention provides a vision-adaptive naked-eye 3D image processing method and device. The method includes: acquiring image information of the current observer's facial features; analyzing this image information to generate the current observer's actual interpupillary-distance data; and adjusting the data source of the 3D image according to this data, generating a 3D image adapted to the current observer's interpupillary distance. The beneficial effect of the present invention is that, because the data source of the 3D image is adjusted according to the observer's actual interpupillary distance, an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.

Description

Vision-adaptive naked-eye 3D image processing method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a vision-adaptive naked-eye 3D image processing method and device.
Background technique
With the rapid development of science and technology, naked-eye 3D display technology has matured and is gradually entering daily life. In the prior art, when an observer watches a naked-eye 3D image from different angles and different distances, the visual effect differs; when the image is viewed from an unsuitable position, the visual effect can be poor, degrading the user experience.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to provide a vision-adaptive naked-eye 3D image processing method and device.
The technical scheme adopted to solve the above technical problem is a vision-adaptive naked-eye 3D image processing method, comprising:
acquiring image information of the current observer's facial features;
analyzing the image information of the current observer's facial features to generate the current observer's actual interpupillary-distance data;
adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to the current observer's interpupillary distance.
The beneficial effect of the present invention is that, by adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, a 3D image adapted to the observer's interpupillary distance is generated, so that an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.
Based on the above technical solution, the present invention can be further improved as follows.
Further, the step of analyzing the image information of the current observer's facial features and generating the current observer's actual interpupillary-distance data comprises:
obtaining the left-eye image position information and the right-eye image position information of the observer from the image information of the current observer's facial features;
analyzing the left-eye and right-eye image position information to generate the image interpupillary-distance data of the observer's facial features;
converting the image interpupillary-distance data of the observer's facial features according to an acquired scale-factor measurement of the real space, generating the current observer's actual interpupillary-distance data in a three-dimensional coordinate system.
The beneficial effect of this further scheme is: the acquired scale-factor measurement of the real space converts the image interpupillary-distance data into the current observer's actual interpupillary-distance data in a three-dimensional coordinate system; the data source of the 3D image is then adjusted according to this data, generating a 3D image adapted to the current observer's interpupillary distance, so that an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.
Further, the step of adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data and generating a 3D image adapted to the current observer's interpupillary distance comprises:
acquiring first two-channel (binocular) video image information;
performing real-time compression, decoding and distortion correction on the two-channel video image information to generate second two-channel video image information;
performing depth calculation and synthesis on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to the current observer's interpupillary distance.
The beneficial effect of this further scheme is: depth calculation and synthesis are performed on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to that interpupillary distance, so that an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.
Further, the step of performing depth calculation and synthesis on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data and generating a 3D image adapted to the current observer's interpupillary distance comprises:
obtaining the parallax-angle information of the real-time scene in the second two-channel video image information;
analyzing the parallax-angle information to generate disparity-range information;
analyzing the disparity-range information, the acquired picture-point correlation information and the acquired adjustment-factor information, generating the current observer's maximum parallax-angle constraint information;
processing the synthesis offset of the stereo image pair in the second two-channel video image information according to the current observer's maximum parallax-angle constraint information, generating a 3D image adapted to the current observer's interpupillary distance.
The beneficial effect of this further scheme is: the disparity-range information, the acquired picture-point correlation information and the acquired adjustment-factor information are analyzed to generate the current observer's maximum parallax-angle constraint information, which improves the legibility of the 3D image, reduces its distortion rate and improves the user experience.
Further, the picture-point correlation information is obtained as follows:
obtaining an arbitrary point of the real-time scene in multiple items of second two-channel video image information;
analyzing that point of the real-time scene in the multiple items of second two-channel video image information to generate the picture-point correlation information.
The beneficial effect of this further scheme is: an arbitrary point of the real-time scene in the multiple items of second two-channel video image information is analyzed to generate the picture-point correlation information; obtaining this information reduces the matching range, accelerates the system's calculation of matching points and reduces the probability of mismatching.
Further, a coordinate system is established on the display interface, and an arbitrary point of the real-time scene in the second two-channel video image information is calculated by the following formula:
Pc(xc, yc, zc) = (B·XL/D, B·Y/D, B·f/D) ........ (1),
where Pc(xc, yc, zc) is an arbitrary point of the real-time scene in the second two-channel video image information; B is the current observer's actual interpupillary distance; D is the disparity of the stereo image pair, D = XL − XR; f is the focal length of the two cameras; X is the direction from the left eye toward the right eye; XL is the relative position of the point on the X axis; Y is the vertically downward Y axis of the coordinate system; Z is the direction of observation toward the display interface;
The picture-point correlation information is calculated by formula (2) (the expression is not reproduced in the source text),
where W is the picture-point correlation; dmax is the maximum viewing distance; dmin is the minimum viewing distance; m is the template size, i.e. the size of the display interface; Iright is the right image; Ileft is the left image; i is the distance from the left viewpoint to the center of the display interface; d is the distance from the left viewpoint to the interface boundary; and x, y are the pixel coordinates of the picture point in the left and right images.
The beneficial effect of this further scheme is: an arbitrary point of the real-time scene in the multiple items of second two-channel video image information is analyzed to generate the picture-point correlation information; obtaining this information reduces the matching range, accelerates the system's calculation of matching points and reduces the probability of mismatching.
Further, the parallax-angle information is calculated by the following formula:
β = 2·arctan(D/2f) ........ (3),
where β is the parallax angle; D is the disparity of the stereo image pair; and f is the distance from the observer's eyes to the display interface;
The adjustment-factor information is calculated by formula (4) (the expression is not reproduced in the source text),
where delta is the adjustment factor; ω is the width of the stereo image pair; e is the current observer's actual interpupillary distance; f is the distance from the observer's eyes to the display interface; and φ is the aperture of the display interface;
The current observer's maximum parallax-angle constraint information is calculated by formula (5) (the expression is not reproduced in the source text),
where f is the distance from the observer's eyes to the display interface; P is an arbitrary point of the real-time scene in the multiple items of second two-channel video image information; and β is the parallax angle.
The beneficial effect of this further scheme is: the disparity-range information, the acquired picture-point correlation information and the acquired adjustment-factor information are analyzed to generate the current observer's maximum parallax-angle constraint information, which improves the legibility of the 3D image, reduces its distortion rate and improves the user experience.
In addition, the present invention provides a vision-adaptive naked-eye 3D image processing device, comprising:
a memory 1 for storing a computer program;
a processor 2 for executing the computer program to implement any of the vision-adaptive naked-eye 3D image processing methods described above.
The beneficial effect of the present invention is that, by adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, a 3D image adapted to the observer's interpupillary distance is generated, so that an ideal 3D effect is achieved from a wider range of viewing angles, improving the user experience.
In addition, the present invention provides a storage medium in which instructions are stored; when a computer reads the instructions, the computer executes any of the vision-adaptive naked-eye 3D image processing methods described above.
The beneficial effect of the present invention is that, by adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, a 3D image adapted to the observer's interpupillary distance is generated, so that an ideal 3D effect is achieved from a wider range of viewing angles, improving the user experience.
Additional aspects and advantages of the invention will be set forth in part in the following description, will in part be apparent from the description, or may be learned through practice of the invention.
Detailed description of the invention
Fig. 1 is a schematic flow chart of a vision-adaptive naked-eye 3D image processing method provided by an embodiment of the present invention.
Fig. 2 is a schematic structural block diagram of a vision-adaptive naked-eye 3D image processing device provided by an embodiment of the present invention.
Fig. 3 is a first schematic diagram of the principle of a vision-adaptive naked-eye 3D image processing method provided by an embodiment of the present invention.
Fig. 4 is a second schematic diagram of the principle of a vision-adaptive naked-eye 3D image processing method provided by an embodiment of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the given examples serve only to explain the invention and are not intended to limit its scope.
As shown in Figs. 1 to 4.
An embodiment of the present invention provides a vision-adaptive naked-eye 3D image processing method, comprising:
acquiring image information of the current observer's facial features;
analyzing the image information of the current observer's facial features to generate the current observer's actual interpupillary-distance data;
adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to the current observer's interpupillary distance.
In the above embodiment, the data source of the 3D image is adjusted according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to that interpupillary distance, so that an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.
Specifically, image data containing the observer's facial features is acquired; based on facial-feature analysis of this image data, the left-eye image position and the right-eye image position are obtained; from these two positions, the interpupillary distance in image coordinates is obtained; a scale-factor measurement of the real space is then obtained, by which the image interpupillary distance is converted into the actual interpupillary distance in three-dimensional space; and the actual interpupillary-distance value is output.
The observer's interpupillary distance is thus obtained automatically and in real time, improving the timeliness of interpupillary-distance detection. Based on this automatically obtained value, the data source of the 3D picture can be adjusted to the observer's own interpupillary distance, so that the 3D display effect follows changes in the observer's interpupillary distance.
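As a minimal sketch of the conversion step above (the eye-center positions would come from a face-landmark detector, which the patent does not name; the function and parameter names, and the per-pixel scale factor as the "scale-factor measurement of the real space", are assumptions):

```python
import math

def actual_interpupillary_distance(left_eye_px, right_eye_px, mm_per_pixel):
    """Estimate the observer's actual interpupillary distance (IPD).

    left_eye_px / right_eye_px : (x, y) pixel positions of the eye centers,
        e.g. from a face-landmark detector (assumed, not specified in the patent).
    mm_per_pixel : scale factor of the real space, i.e. how many millimetres
        one image pixel covers at the observer's distance (assumed calibration).
    """
    dx = right_eye_px[0] - left_eye_px[0]
    dy = right_eye_px[1] - left_eye_px[1]
    image_ipd_px = math.hypot(dx, dy)   # interpupillary distance in image coordinates
    return image_ipd_px * mm_per_pixel  # actual interpupillary distance in millimetres

# Example: eye centers 310 px apart horizontally, 0.21 mm per pixel
ipd_mm = actual_interpupillary_distance((480, 520), (790, 525), 0.21)
```

The returned value is what the method feeds into the adjustment of the 3D data source.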
The step of analyzing the image information of the current observer's facial features and generating the current observer's actual interpupillary-distance data may include:
obtaining the left-eye image position information and the right-eye image position information of the observer from the image information of the current observer's facial features;
analyzing the left-eye and right-eye image position information to generate the image interpupillary-distance data of the observer's facial features;
converting the image interpupillary-distance data according to the acquired scale-factor measurement of the real space, generating the current observer's actual interpupillary-distance data in a three-dimensional coordinate system.
In the above embodiment, the acquired scale-factor measurement of the real space converts the image interpupillary-distance data into the current observer's actual interpupillary-distance data in a three-dimensional coordinate system; the data source of the 3D image is then adjusted according to this data, generating a 3D image adapted to the current observer's interpupillary distance, so that an ideal 3D effect is achieved from a wider range of viewing angles, improving the user experience.
The step of adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data and generating a 3D image adapted to the current observer's interpupillary distance may include:
acquiring first two-channel video image information;
performing real-time compression, decoding and distortion correction on the two-channel video image information to generate second two-channel video image information;
performing depth calculation and synthesis on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to the current observer's interpupillary distance.
The step of performing depth calculation and synthesis on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data and generating a 3D image adapted to the current observer's interpupillary distance may include:
obtaining the parallax-angle information of the real-time scene in the second two-channel video image information;
analyzing the parallax-angle information to generate disparity-range information;
analyzing the disparity-range information, the acquired picture-point correlation information and the acquired adjustment-factor information, generating the current observer's maximum parallax-angle constraint information;
processing the synthesis offset of the stereo image pair in the second two-channel video image information according to the current observer's maximum parallax-angle constraint information, generating a 3D image adapted to the current observer's interpupillary distance.
In the above embodiment, depth calculation and synthesis are performed on the real-time scene in the second two-channel video image information according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to that interpupillary distance, so that an ideal 3D effect is achieved when the naked-eye 3D video or image is viewed from a wider range of viewing angles, improving the user experience.
Specifically, a calibrated two-channel high-resolution color video acquisition device is used. An important link in video acquisition is the synchronization of the left-eye and right-eye images; the synchronized left and right images constitute a stereo image pair.
The stereo images are compressed for transmission; real-time video compression can be implemented either by an encoding chip integrated in the system or in software. The compressed stereo image pair is transmitted to a video server, decoded back into RAW-format picture data, and the distortion at the left and right image edges caused by the display interface is corrected, which assists the adjustment of the relative displacement of the stereo images during synthesis. The stereo images are then synthesized into the picture format required by the stereoscopic display and transmitted to the 3D display for presentation.
Further, the picture-point correlation information is obtained as follows:
obtaining an arbitrary point of the real-time scene in multiple items of second two-channel video image information;
analyzing that point of the real-time scene in the multiple items of second two-channel video image information to generate the picture-point correlation information.
In the above embodiment, an arbitrary point of the real-time scene in the multiple items of second two-channel video image information is analyzed to generate the picture-point correlation information; obtaining this information reduces the matching range, accelerates the system's calculation of matching points and reduces the probability of mismatching.
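The restricted-range matching described above can be sketched as follows. Since the patent's correlation formula (2) is not reproduced in the source text, a sum-of-absolute-differences window score is used here as a stand-in for the picture-point correlation W; the function, its parameters and the window size are all assumptions:

```python
import numpy as np

def best_disparity(left, right, x, y, d_min, d_max, half_win=3):
    """Find the disparity of point (x, y) of the left image by searching the
    right image only within the restricted range [d_min, d_max].

    A sum-of-absolute-differences (SAD) window score stands in for the
    patent's picture-point correlation W (formula (2) is not reproduced in
    the source, so this scoring function is an assumption).
    """
    h, w = left.shape
    patch_l = left[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    best_d, best_score = d_min, float("inf")
    for d in range(d_min, d_max + 1):   # restricting the range speeds matching
        xr = x - d                      # candidate column in the right image
        if xr - half_win < 0 or xr + half_win >= w:
            continue
        patch_r = right[y - half_win:y + half_win + 1,
                        xr - half_win:xr + half_win + 1]
        score = np.abs(patch_l.astype(int) - patch_r.astype(int)).sum()
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```

Narrowing [d_min, d_max] is exactly the speed-up and mismatch reduction the text claims: fewer candidate offsets are scored per point.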
Further, a coordinate system is established on the display interface, and an arbitrary point of the real-time scene in the second two-channel video image information is calculated by the following formula:
Pc(xc, yc, zc) = (B·XL/D, B·Y/D, B·f/D) ........ (1),
where Pc(xc, yc, zc) is an arbitrary point of the real-time scene in the second two-channel video image information; B is the current observer's actual interpupillary distance; D is the disparity of the stereo image pair, D = XL − XR; f is the focal length of the two cameras; X is the direction from the left eye toward the right eye; XL is the relative position of the point on the X axis; Y is the vertically downward Y axis of the coordinate system; Z is the direction of observation toward the display interface;
The picture-point correlation information is calculated by formula (2) (the expression is not reproduced in the source text),
where W is the picture-point correlation; dmax is the maximum viewing distance; dmin is the minimum viewing distance; m is the template size, i.e. the size of the display interface; Iright is the right image; Ileft is the left image; i is the distance from the left viewpoint to the center of the display interface; d is the distance from the left viewpoint to the interface boundary; and x, y are the pixel coordinates of the picture point in the left and right images.
In the above embodiment, an arbitrary point of the real-time scene in the multiple items of second two-channel video image information is analyzed to generate the picture-point correlation information, which reduces the matching range, accelerates the system's calculation of matching points and reduces the probability of mismatching.
The depth of field of the stereo scene is obtained by pixel-feature matching between the left and right images; the coordinates of any point Pc(xc, yc, zc) in the field of view under the display-interface coordinate system can be calculated by formula (1).
For any picture point of the stereo pair on the left image, as long as the corresponding reference point can be found on the right image, the depth coordinate of the point can be calculated with formula (1).
In formula (2), the maximum disparity corresponds to the nearest object that can be detected, and a disparity of 0 pixels represents an object infinitely far away. Reducing the matching range accelerates the system's calculation of matching points and reduces the probability of mismatching.
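Formula (1) translates directly into code (the variable names follow the formula's symbols; the function name and the sample numbers are illustrative assumptions):

```python
def triangulate_point(x_left, x_right, y, baseline, focal_length):
    """Reconstruct a scene point from a matched stereo pair, after formula (1):
    Pc = (B*XL/D, B*Y/D, B*f/D), with disparity D = XL - XR.

    baseline     : B, here the observer's actual interpupillary distance.
    focal_length : f of the two cameras (same units as the coordinates).
    Returns (xc, yc, zc); raises for zero disparity (a point at infinity).
    """
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: point at infinite distance")
    return (baseline * x_left / disparity,
            baseline * y / disparity,
            baseline * focal_length / disparity)

# e.g. XL=120, XR=100, Y=40, B=65, f=500 -> zc = 65*500/20 = 1625
point = triangulate_point(120, 100, 40, 65, 500)
```

Note how the depth zc = B·f/D grows as the disparity shrinks, matching the remark above that a disparity of 0 pixels corresponds to an infinitely distant object.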
Further, the parallax-angle information is calculated by the following formula:
β = 2·arctan(D/2f) ........ (3),
where β is the parallax angle; D is the disparity of the stereo image pair; and f is the distance from the observer's eyes to the display interface;
The adjustment-factor information is calculated by formula (4) (the expression is not reproduced in the source text),
where delta is the adjustment factor; ω is the width of the stereo image pair; e is the current observer's actual interpupillary distance; f is the distance from the observer's eyes to the display interface; and φ is the aperture of the display interface;
The current observer's maximum parallax-angle constraint information is calculated by formula (5) (the expression is not reproduced in the source text),
where f is the distance from the observer's eyes to the display interface; P is an arbitrary point of the real-time scene in the multiple items of second two-channel video image information; and β is the parallax angle.
In the above embodiment, the disparity-range information, the acquired picture-point correlation information and the acquired adjustment-factor information are analyzed to generate the current observer's maximum parallax-angle constraint information, which improves the legibility of the 3D image, reduces its distortion rate and improves the user experience.
The disparity range allowed when an observer watches a stereo image pair is limited, and this range can be stated in terms of the parallax angle, calculated by formula (3).
The maximum parallax angle β the observer can accept is at most 1.5 degrees. When β exceeds 1.5 degrees, the observer feels obvious discomfort, and the stereoscopic effect of the synthesized image also declines sharply.
To ensure that the disparity of the synthesized stereo image does not exceed the maximum parallax angle, formula (4) defines the adjustment factor.
With the value of f used in formula (3), the synthesis offset of the stereo image pair can be constrained by the observer's maximum parallax angle; formulas (4) and (5) ensure real-time adjustment of the stereo-image synthesis so that it meets the observer's visual requirements.
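Formula (3) and the 1.5-degree limit above combine into a simple comfort check (the function and parameter names, and the sample viewing distance, are illustrative; only the formula and the threshold come from the text):

```python
import math

MAX_PARALLAX_ANGLE_DEG = 1.5  # comfort limit stated in the text

def parallax_angle_deg(disparity, viewing_distance):
    """beta = 2*arctan(D / (2*f)), formula (3); D and f in the same units."""
    return math.degrees(2.0 * math.atan(disparity / (2.0 * viewing_distance)))

def within_comfort_zone(disparity, viewing_distance):
    """True when the stereo pair's disparity keeps beta <= 1.5 degrees."""
    return parallax_angle_deg(disparity, viewing_distance) <= MAX_PARALLAX_ANGLE_DEG

# Viewing from 600 mm, a 15 mm disparity gives beta of about 1.43 degrees
ok = within_comfort_zone(15, 600)
```

A synthesis loop would shrink the offset of the stereo pair whenever this check fails, which is the real-time adjustment the text attributes to formulas (4) and (5).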
In addition, an embodiment of the present invention provides a vision-adaptive naked-eye 3D image processing device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement any of the vision-adaptive naked-eye 3D image processing methods described above.
In the above embodiment, the data source of the 3D image is adjusted according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to that interpupillary distance, so that an ideal 3D effect is achieved from a wider range of viewing angles, improving the user experience.
In addition, an embodiment of the present invention provides a storage medium in which instructions are stored; when a computer reads the instructions, the computer executes any of the vision-adaptive naked-eye 3D image processing methods described above.
In the above embodiment, the data source of the 3D image is adjusted according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to that interpupillary distance, so that an ideal 3D effect is achieved from a wider range of viewing angles, improving the user experience.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents, and such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A vision-adaptive naked-eye 3D image processing method, characterized by comprising:
acquiring image information of the current observer's facial features;
analyzing the image information of the current observer's facial features to generate the current observer's actual interpupillary-distance data;
adjusting the data source of the 3D image according to the current observer's actual interpupillary-distance data, generating a 3D image adapted to the current observer's interpupillary distance.
2. The vision-adaptive naked-eye 3D image processing method according to claim 1, characterized in that the step of analyzing the image information of the current observer's facial features to generate the actual interpupillary distance data of the current observer comprises:
acquiring the left-eye image position information and the right-eye image position information of the observer from the image information of the current observer's facial features;
analyzing the left-eye image position information and the right-eye image position information to generate image interpupillary distance data of the observer's facial features; and
converting the image interpupillary distance data of the observer's facial features according to acquired real-space size factor detection information, to generate the actual interpupillary distance data of the current observer in a three-dimensional coordinate system.
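The conversion in claim 2, from image-space eye positions to a real-space interpupillary distance, can be sketched as follows. This is a minimal illustration assuming the real-space size factor reduces to a single scalar (`scale_mm_per_px`, a hypothetical name) relating image pixels to millimetres at the face plane; the patent does not prescribe a specific detector or calibration method.

```python
import math

def actual_ipd_mm(left_eye_px, right_eye_px, scale_mm_per_px):
    """Estimate the observer's actual interpupillary distance (IPD).

    left_eye_px, right_eye_px: (x, y) eye centers detected in the face image.
    scale_mm_per_px: assumed real-space size factor (mm per pixel at the face plane).
    """
    dx = right_eye_px[0] - left_eye_px[0]
    dy = right_eye_px[1] - left_eye_px[1]
    image_ipd_px = math.hypot(dx, dy)      # image-space interpupillary distance
    return image_ipd_px * scale_mm_per_px  # convert to real-space millimetres
```

In practice the size factor would come from the camera calibration and an estimate of the face-to-camera distance; here it is taken as given.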
3. The vision-adaptive naked-eye 3D image processing method according to claim 1, characterized in that the step of adjusting the data source of the 3D image according to the actual interpupillary distance data of the current observer to generate a 3D image adapted to the interpupillary distance of the current observer comprises:
acquiring first two-way video image information;
performing real-time compression, decoding, and distortion correction on the first two-way video image information to generate second two-way video image information; and
performing depth calculation and synthesis on the real-time scene in the second two-way video image information according to the actual interpupillary distance data of the current observer, to generate a 3D image adapted to the interpupillary distance of the current observer.
4. The vision-adaptive naked-eye 3D image processing method according to claim 3, characterized in that the step of performing depth calculation and synthesis on the real-time scene in the second two-way video image information according to the actual interpupillary distance data of the current observer to generate a 3D image adapted to the interpupillary distance of the current observer comprises:
acquiring parallax angle information of the real-time scene in the second two-way video image information;
analyzing the parallax angle information to generate disparity range information;
analyzing the disparity range information, acquired image-point correlation information, and acquired adjustment factor information to generate maximum parallax angle motion constraint information for the current observer; and
adjusting the synthesis offset of the stereoscopic image pair in the second two-way video image information according to the maximum parallax angle motion constraint information of the current observer, to generate a 3D image adapted to the interpupillary distance of the current observer.
5. The vision-adaptive naked-eye 3D image processing method according to claim 4, characterized in that the image-point correlation information is obtained by the following method:
acquiring any point of the real-time scene in a plurality of pieces of the second two-way video image information; and
analyzing that point of the real-time scene across the plurality of pieces of the second two-way video image information to generate the image-point correlation information.
6. The vision-adaptive naked-eye 3D image processing method according to claim 5, characterized in that a coordinate system is established on the display interface, and any point of the real-time scene in the second two-way video image information is calculated by the following formula:
Pc(xc, yc, zc) = (B·XL/D, B·Y/D, B·f/D),
wherein Pc(xc, yc, zc) is any point of the real-time scene in the second two-way video image information; B is the actual interpupillary distance of the current observer; D is the disparity of the stereoscopic image pair, D = XL − XR; f is the focal length of the current observer's two eyes; X is the direction pointing from the left eye to the right eye; XL is the relative position of the point on the X axis; Y is the vertically downward Y axis of the coordinate system; and Z is the direction from the observer toward the display interface;
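The triangulation formula of claim 6 can be written out directly, following the symbols defined in the claim (B the observer's interpupillary distance, D = XL − XR the disparity of the stereoscopic pair, f the focal length). The function name and the zero-disparity guard are illustrative additions, not part of the patent:

```python
def triangulate(B, f, XL, XR, Y):
    """Recover the 3D point Pc = (xc, yc, zc) of a stereo correspondence,
    per Pc = (B*XL/D, B*Y/D, B*f/D) with disparity D = XL - XR."""
    D = XL - XR  # disparity of the stereoscopic image pair
    if D == 0:
        # Zero disparity means the point lies at infinity; the formula is undefined.
        raise ValueError("zero disparity: point at infinity")
    return (B * XL / D, B * Y / D, B * f / D)
```

Note that depth zc = B·f/D grows as disparity shrinks, which is the usual behaviour of stereo triangulation: distant points produce small disparities.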
The image-point correlation information is calculated by the following formula:
wherein W is the image-point correlation; dmax is the maximum viewing distance; dmin is the minimum viewing distance; M is the template size, i.e., the size of the display interface; Iright is the right image; Ileft is the left image; i is the distance from the left viewpoint to the center of the display interface; d is the distance from the left viewpoint to the interface boundary; and x, y are the pixel coordinates of the image point in the left and right images.
7. The vision-adaptive naked-eye 3D image processing method according to claim 4, characterized in that the parallax angle information is calculated by the following formula:
β = 2·arctan(D/2f),
wherein β is the parallax angle; D is the disparity of the stereoscopic image pair; and f is the distance from the observer's eyes to the display interface;
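The parallax angle formula β = 2·arctan(D/2f) of claim 7 is straightforward to compute; this sketch follows the claim's symbols, with the function name as an illustrative assumption (note that in this claim f denotes the viewing distance, not a focal length):

```python
import math

def parallax_angle(D, f):
    """Parallax angle beta = 2*arctan(D / (2f)).

    D: disparity of the stereoscopic image pair.
    f: distance from the observer's eyes to the display interface (same units as D).
    """
    return 2.0 * math.atan(D / (2.0 * f))
```

Zero disparity yields a zero parallax angle, and the angle approaches π as the disparity grows much larger than the viewing distance.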
the adjustment factor information is calculated by the following formula:
wherein delta is the adjustment factor information; ω is the width of the stereoscopic image pair; e is the actual interpupillary distance of the current observer; f is the distance from the observer's eyes to the display interface; and φ is the aperture of the display interface;
the maximum parallax angle motion constraint information of the current observer is calculated by the following formula:
wherein f is the distance from the observer's eyes to the display interface; P is any point of the real-time scene in the plurality of pieces of the second two-way video image information; and β is the parallax angle.
8. A vision-adaptive naked-eye 3D image processing apparatus, characterized by comprising:
a memory for storing a computer program; and
a processor for executing the computer program to implement the vision-adaptive naked-eye 3D image processing method according to any one of claims 1 to 7.
9. A storage medium, characterized in that instructions are stored in the storage medium, and when a computer reads the instructions, the computer is caused to execute the vision-adaptive naked-eye 3D image processing method according to any one of claims 1 to 7.
CN201910080968.2A 2019-01-28 2019-01-28 A kind of vision self-adapting naked eye 3D rendering processing method and processing device Pending CN109819231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910080968.2A CN109819231A (en) 2019-01-28 2019-01-28 A kind of vision self-adapting naked eye 3D rendering processing method and processing device


Publications (1)

Publication Number Publication Date
CN109819231A true CN109819231A (en) 2019-05-28

Family

ID=66605388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910080968.2A Pending CN109819231A (en) 2019-01-28 2019-01-28 A kind of vision self-adapting naked eye 3D rendering processing method and processing device

Country Status (1)

Country Link
CN (1) CN109819231A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110308790A (en) * 2019-06-04 2019-10-08 宁波视睿迪光电有限公司 The image adjusting device and system of teaching demonstration
CN111918052A (en) * 2020-08-14 2020-11-10 广东申义实业投资有限公司 Vertical rotary control device and method for converting plane picture into 3D image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103260040A (en) * 2013-04-12 2013-08-21 南京熊猫电子制造有限公司 3D display adaptive adjusting method based on human vision characteristics
CN104320647A (en) * 2014-10-13 2015-01-28 深圳超多维光电子有限公司 Three-dimensional image generating method and display device
CN105611278A (en) * 2016-02-01 2016-05-25 欧洲电子有限公司 Image processing method and system for preventing naked eye 3D viewing dizziness and display device
US20160156896A1 (en) * 2014-12-01 2016-06-02 Samsung Electronics Co., Ltd. Apparatus for recognizing pupillary distance for 3d display
CN105704479A (en) * 2016-02-01 2016-06-22 欧洲电子有限公司 Interpupillary distance measuring method and system for 3D display system and display device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110308790A (en) * 2019-06-04 2019-10-08 宁波视睿迪光电有限公司 The image adjusting device and system of teaching demonstration
CN111918052A (en) * 2020-08-14 2020-11-10 广东申义实业投资有限公司 Vertical rotary control device and method for converting plane picture into 3D image

Similar Documents

Publication Publication Date Title
KR101761751B1 (en) Hmd calibration with direct geometric modeling
JP3826236B2 (en) Intermediate image generation method, intermediate image generation device, parallax estimation method, and image transmission display device
US9277207B2 (en) Image processing apparatus, image processing method, and program for generating multi-view point image
US9699438B2 (en) 3D graphic insertion for live action stereoscopic video
US9600714B2 (en) Apparatus and method for calculating three dimensional (3D) positions of feature points
CN107105333A (en) A kind of VR net casts exchange method and device based on Eye Tracking Technique
WO2017156905A1 (en) Display method and system for converting two-dimensional image into multi-viewpoint image
CN101247530A (en) Three-dimensional image display apparatus and method for enhancing stereoscopic effect of image
KR100560464B1 (en) Multi-view display system with viewpoint adaptation
MX2020009791A (en) Multifocal plane based method to produce stereoscopic viewpoints in a dibr system (mfp-dibr).
JP2010250452A (en) Arbitrary viewpoint image synthesizing device
US8094148B2 (en) Texture processing apparatus, method and program
US20120218393A1 (en) Generating 3D multi-view interweaved image(s) from stereoscopic pairs
CN105812766B (en) A kind of vertical parallax method for reducing
TW201315209A (en) System and method of rendering stereoscopic images
WO2022267573A1 (en) Switching control method for glasses-free 3d display mode, and medium and system
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
CN113253845A (en) View display method, device, medium and electronic equipment based on eye tracking
CN109819231A (en) A kind of vision self-adapting naked eye 3D rendering processing method and processing device
CN104159099A (en) Method of setting binocular stereoscopic camera in 3D stereoscopic video production
Knorr et al. An image-based rendering (ibr) approach for realistic stereo view synthesis of tv broadcast based on structure from motion
KR101841750B1 (en) Apparatus and Method for correcting 3D contents by using matching information among images
KR101634225B1 (en) Device and Method for Multi-view image Calibration
KR20110025083A (en) Apparatus and method for displaying 3d image in 3d image system
US9269177B2 (en) Method for processing image and apparatus for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190528