CN109104603B - Viewpoint compensation method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN109104603B (application CN201811119841.9A)
- Authority
- CN
- China
- Prior art keywords
- distance
- position coordinate
- determining
- virtual camera
- viewpoint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Processing Or Creating Images (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The embodiments of the invention disclose a viewpoint compensation method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a first position coordinate of a tracked target at the current moment and a second position coordinate at the corresponding moment after a first delay; determining the distance difference between the first position coordinate and the second position coordinate; determining the view offset between the views watched at the different moments according to the first position coordinate, the second position coordinate, and the distance difference; updating, according to the distance difference, a distance-adjustment parameter used to determine the distance between the left virtual camera and the right virtual camera; and determining a viewpoint compensation amount according to the distance-adjustment parameter and the view offset, so that viewpoint compensation can be performed accordingly. This technical scheme reduces the probability of crosstalk between the left-eye and right-eye views caused by left-right movement of the tracked target while a naked-eye 3D image is being watched, and improves the user's viewing experience.
Description
Technical Field
Embodiments of the present invention relate to stereoscopic display technologies, and in particular, to a method and an apparatus for viewpoint compensation, an electronic device, and a storage medium.
Background
When a naked-eye 3D display device performs naked-eye 3D display, a stereoscopic image comprising a left-eye view and a right-eye view is first laid out, i.e., the left-eye view and the right-eye view are arranged and displayed on the display panel according to a certain rule. A light-splitting device arranged on the display panel, for example a lenticular lens, then directs the left-eye view to the user's left eye and the right-eye view to the user's right eye, so that the user perceives a 3D image.
However, due to hardware limitations of the eye-tracking device and the time consumed by software calculation, the optimal layout cannot be produced with low delay. When the viewer moves left and right quickly, parallel to the display screen, the eye coordinates obtained by the eye-tracking device therefore carry a certain spatial delay. Performing the optimal layout based on those delayed eye coordinates inevitably degrades the 3D stereoscopic display effect, causing severe crosstalk between the left-eye and right-eye images when the user watches.
Disclosure of Invention
The invention provides a viewpoint compensation method and apparatus, an electronic device, and a storage medium, which reduce the probability of crosstalk between the left-eye and right-eye images when the viewer's eyes move in naked-eye 3D display.
In a first aspect, an embodiment of the present invention provides a viewpoint compensation method, including:
acquiring a first position coordinate of a tracked target at the current moment;
acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay;
determining a distance difference between the first position coordinate and the second position coordinate;
determining view offset among views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference;
updating a distance adjusting parameter according to the distance difference; wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera;
and determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, so as to perform viewpoint compensation according to the viewpoint compensation amount.
In a second aspect, an embodiment of the present invention further provides a viewpoint compensation apparatus, including:
the first position coordinate acquisition module is used for acquiring a first position coordinate of a tracked target at the current moment;
a second position coordinate obtaining module, configured to obtain a second position coordinate of the tracked target at a corresponding time after the first delay;
a distance difference determination module for determining a distance difference between the first position coordinate and the second position coordinate;
the view offset determining module is used for determining view offsets among the views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference;
the distance adjusting parameter updating module is used for updating the distance adjusting parameters according to the distance difference; wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera;
and the viewpoint compensation amount determining module is used for determining the viewpoint compensation amount according to the distance adjusting parameter and the view offset so as to perform viewpoint compensation according to the viewpoint compensation amount.
In a third aspect, an embodiment of the present invention further provides an electronic device, including an eye tracking device and a display device, further including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the viewpoint compensation method provided in the embodiment of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the viewpoint compensation method provided in the embodiment of the first aspect.
In the embodiments of the invention, the first position coordinate and the second position coordinate of the tracked target are acquired at the current moment and at the corresponding moment after the first delay; the distance difference between the first position coordinate and the second position coordinate is determined; the view offset between the views watched at the different moments is determined according to the first position coordinate, the second position coordinate, and the distance difference; the distance-adjustment parameter used to determine the distance between the left virtual camera and the right virtual camera is updated according to the distance difference; and the viewpoint compensation amount is determined according to the distance-adjustment parameter and the view offset, so that viewpoint compensation can be performed accordingly. This technical scheme reduces the probability of crosstalk between the left-eye and right-eye views caused by left-right movement of the tracked target while a naked-eye 3D image is being watched, and improves the user's viewing experience.
Drawings
Fig. 1 is a flowchart of a viewpoint compensation method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a viewpoint compensation method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a viewpoint compensation method in a third embodiment of the present invention;
fig. 4 is a flowchart of a viewpoint compensation method in a fourth embodiment of the present invention;
fig. 5 is a structural diagram of a viewpoint compensating apparatus in a fifth embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a viewpoint compensation method according to a first embodiment of the present invention. This embodiment applies to the situation of watching a naked-eye 3D display picture. The method is executed by a viewpoint compensation apparatus, which is implemented in software and/or hardware and configured in an electronic device for naked-eye 3D display.
The view point compensation method as shown in fig. 1 includes:
and S110, acquiring a first position coordinate of the tracked target at the current moment.
The tracked target is the left eye and/or the right eye of a user watching the naked-eye 3D display picture. The first position coordinate is the left-eye coordinate and/or the right-eye coordinate of the viewer's eyes at the current moment, or the eyebrow-center coordinate corresponding to the midpoint between the left eye and the right eye. The first position coordinate is a three-dimensional spatial coordinate.
Specifically, left eye coordinates corresponding to a left eye and/or right eye coordinates corresponding to a right eye of a viewing user at the current time are acquired from the human eye tracking device as first position coordinates.
And S120, acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay.
The first delay is a preset delay time, and is set by a technician according to an empirical value.
Specifically, the left-eye coordinate and/or right-eye coordinate of the same viewing user at the corresponding moment after the first delay are obtained from the eye-tracking device and serve as the second position coordinate, which is likewise a three-dimensional spatial coordinate.
And S130, determining a distance difference value between the first position coordinate and the second position coordinate.
And determining a distance difference value between the first position coordinate and the second position coordinate according to a three-dimensional space Euclidean distance calculation formula.
Specifically, the distance difference between the first position coordinate and the second position coordinate is determined according to the formula

dist = √( a_x·(x_t − x_{t+Δt})² + a_y·(y_t − y_{t+Δt})² + a_z·(z_t − z_{t+Δt})² )

where dist is the distance difference; (x_t, y_t, z_t) are the left-eye, right-eye, or eyebrow-center coordinates of the viewing user at the current moment; (x_{t+Δt}, y_{t+Δt}, z_{t+Δt}) are the left-eye, right-eye, or eyebrow-center coordinates of the viewing user at the corresponding moment after the first delay; and (a_x, a_y, a_z) are the weight values in the different directions. The x direction is parallel to the horizontal direction of the naked-eye 3D display screen; the y direction is parallel to its vertical direction; the z direction is perpendicular to it. The eyebrow-center coordinate is obtained by averaging the left-eye and right-eye coordinates. It should be noted that the left-eye and right-eye coordinates are both coordinates in the virtual world corresponding to the 3D display picture.
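The weighted-distance formula and the eyebrow-center averaging above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the default weights are illustrative.

```python
import math

def distance_difference(p1, p2, weights=(1.0, 1.0, 1.0)):
    """Weighted Euclidean distance between two tracked positions.

    p1: (x, y, z) at the current moment; p2: (x, y, z) after the first delay.
    weights: (a_x, a_y, a_z) per-direction weight values (illustrative defaults).
    """
    return math.sqrt(sum(a * (c1 - c2) ** 2
                         for a, c1, c2 in zip(weights, p1, p2)))

def brow_center(left_eye, right_eye):
    """Eyebrow-center coordinate: the per-axis mean of the two eye coordinates."""
    return tuple((l + r) / 2 for l, r in zip(left_eye, right_eye))
```

With unit weights the result reduces to the ordinary three-dimensional Euclidean distance mentioned in the text.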
S140, determining view offset among views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference.
And the view offset is the offset distance between the current moment and the view displayed at the corresponding moment after the first delay. The view offset is an offset value of the virtual world corresponding to the 3D display picture.
And S150, updating the distance adjusting parameter according to the distance difference.
Wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera. The distance adjusting parameter is a ratio of a distance between the left virtual camera and the right virtual camera to a maximum distance.
The left virtual camera is used for rendering a left eye view viewed by the left eye of the user, and the right virtual camera is used for rendering a right eye view viewed by the right eye of the user.
Illustratively, the distance-adjustment parameter may be updated positively or negatively depending on the magnitude of the distance difference: a positive update increases the distance between the left and right virtual cameras, while a negative update decreases it.
And S160, determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, and performing viewpoint compensation according to the viewpoint compensation amount.
The remaining adjustable proportion of the distance between the left and right virtual cameras is determined from the distance-adjustment parameter, and the viewpoint compensation amount is determined from that remaining proportion and the view offset. The current viewpoint of the tracked target is then determined, and the viewpoint compensation amount is superposed on the current viewpoint to obtain the target viewpoint, according to which the display layout is determined.
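Step S160 above can be sketched as follows, assuming (an assumption, since the text does not give the exact expression) that the remaining adjustable proportion is one minus the distance-adjustment parameter and that it scales the view offset:

```python
def viewpoint_compensation(distance_param, view_offset, current_viewpoint):
    """Sketch of S160 under stated assumptions: scale the view offset by the
    remaining adjustable proportion and superpose it on the current viewpoint."""
    remaining = 1.0 - distance_param          # residual adjustable proportion
    compensation = remaining * view_offset    # viewpoint compensation amount
    return current_viewpoint + compensation   # target viewpoint for layout
```

When the cameras are already at maximum separation (parameter 1), no proportion remains and the viewpoint is left unchanged.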
In this embodiment, the first position coordinate and the second position coordinate of the tracked target are acquired at the current moment and at the corresponding moment after the first delay; the distance difference between the two coordinates is determined; the view offset between the views watched at the different moments is determined according to the first position coordinate, the second position coordinate, and the distance difference; the distance-adjustment parameter used to determine the distance between the left and right virtual cameras is updated according to the distance difference; and the viewpoint compensation amount is determined according to the distance-adjustment parameter and the view offset, so that viewpoint compensation can be performed accordingly. This technical scheme reduces the probability of crosstalk between the left-eye and right-eye views caused by left-right movement of the tracked target while a naked-eye 3D image is being watched, and improves the user's viewing experience.
Example two
Fig. 2 is a flowchart of a viewpoint compensation method in the second embodiment of the present invention. The embodiment of the invention carries out subdivision optimization on the basis of the technical scheme of each embodiment.
Further, when the distance difference is not less than a crosstalk threshold, the operation of determining the view offset between the views watched at different moments according to the first position coordinate, the second position coordinate, and the distance difference is refined into: determining the moving direction of the tracked target according to the magnitudes of the first component of the first position coordinate and the first component of the second position coordinate; determining the moving distance of the tracked target according to the distance difference and the crosstalk threshold; and determining the view offset according to the moving direction and the moving distance. This perfects the mechanism for determining the view offset when the viewing user moves left and right, parallel to the display screen, too fast.
Further, when the distance difference is nonzero and smaller than the crosstalk threshold, the same operation is refined into determining the view offset to be 0, perfecting the mechanism for determining the view offset when the viewing user does not move, or moves slowly, left and right parallel to the display screen.
The view point compensation method as shown in fig. 2 includes:
s210, obtaining a first position coordinate of the tracked target at the current moment.
The left-eye and right-eye coordinates at the current moment are obtained from the eye-tracking module and averaged to obtain the eyebrow-center coordinate, which serves as the first position coordinate.
And S220, acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay.
The left-eye and right-eye coordinates at the corresponding moment after the first delay are obtained from the eye-tracking module and averaged to obtain the eyebrow-center coordinate, which serves as the second position coordinate.
And S230, determining a distance difference value between the first position coordinate and the second position coordinate.
S240, judging whether the distance difference value is not less than a crosstalk threshold value; if yes, executing S251; if not, S252 is executed.
The distance difference is compared with a crosstalk threshold. When the distance difference is greater than or equal to the crosstalk threshold, the viewing user has moved left and right along the x direction too fast within the first delay period, so viewpoint compensation is required to avoid crosstalk between the left-eye and right-eye images when the user watches. When the distance difference is smaller than the crosstalk threshold, the viewing user's left-right movement along the x direction within the first delay period does not generate crosstalk.
Wherein the crosstalk threshold is determined by a skilled person based on empirical values or experimental results.
S251, determining view offset among views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference; execution continues with S260.
When the distance difference is greater than or equal to the crosstalk threshold, it indicates that the viewing user moves too fast within the first delay time period, and thus the view viewed by the user during this time period may have an offset.
And determining the offset of the view watched by the user at the current moment and the corresponding moment after the first delay according to the first position coordinate, the second position coordinate and the distance difference value between the first position coordinate and the second position coordinate.
Illustratively, determining a moving direction of the tracked target according to magnitudes of the first component of the first position coordinate and the first component of the second position coordinate; determining the moving distance of the tracked target according to the distance difference and the crosstalk threshold; and determining the view offset according to the moving direction and the moving distance. Wherein the first component is an x-direction component.
For example, determining the moving direction of the tracked target according to the magnitudes of the first components of the first and second position coordinates may be: if the first component of the second position coordinate is smaller than the first component of the first position coordinate, the moving direction is the negative direction; otherwise, the moving direction is the positive direction. The negative and positive directions are both parallel to the x direction and opposite to each other.
For example, determining the moving distance of the tracked target according to the distance difference and the crosstalk threshold may be: determining the moving distance according to the formula hmd = (2 × dist / thr_c)^0.5, where hmd is the moving distance, dist is the distance difference, and thr_c is the crosstalk threshold.
For example, determining the view offset according to the moving direction and the moving distance may be: setting the absolute value of the view offset as a moving distance; setting the view offset as a negative value when the moving direction is a negative direction; when the moving direction is a positive direction, the view shift amount is set to a positive value.
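Steps S251/S252 can be sketched as one function: the sign of the offset comes from the x components, and the magnitude from the hmd formula in the text. Function and parameter names are illustrative.

```python
def view_offset(first, second, dist, thr_c):
    """Sketch of determining the view offset: zero below the crosstalk
    threshold; otherwise signed moving distance hmd = (2*dist/thr_c)**0.5."""
    if dist < thr_c:                      # S252: slow/no movement, no offset
        return 0.0
    hmd = (2.0 * dist / thr_c) ** 0.5     # moving distance of the tracked target
    # Negative direction when the delayed x component is smaller (S251)
    direction = -1.0 if second[0] < first[0] else 1.0
    return direction * hmd
```

The coordinates `first` and `second` are (x, y, z) tuples; only their first (x) components decide the sign, matching the first-component rule above.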
S252, determining the view offset to be 0; execution continues with S260.
When the distance difference is smaller than the crosstalk threshold, it indicates that the movement of the viewing user in the first delay time period does not generate crosstalk, so that the view viewed by the user in this time period does not have an offset, or the offset does not cause discomfort to the viewing user, and therefore, the view offset may be set to 0 to ignore it.
S260, updating a distance adjusting parameter according to the distance difference; wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera.
Specifically, if the distance difference is not smaller than the crosstalk threshold, updating the distance adjustment parameter according to a negative adjustment factor; if the distance difference value is nonzero and smaller than a crosstalk threshold value, updating the distance adjusting parameter according to a positive adjusting factor; and if the distance difference is zero, keeping the distance adjusting parameter unchanged.
Wherein, updating the distance adjustment parameter according to the negative adjustment factor specifically comprises: and superposing the negative regulating factor on the basis of the distance regulating parameter to obtain a new distance regulating parameter. Since the negative adjustment factor is a negative value, the obtained new distance adjustment parameter is smaller than the distance adjustment parameter that is not updated, and accordingly, the distance between the left virtual camera and the right virtual camera determined according to the new distance adjustment parameter will also become smaller.
Wherein, updating the distance adjustment parameter according to the positive adjustment factor specifically comprises: and superposing the positive adjustment factor on the basis of the distance adjustment parameter to obtain a new distance adjustment parameter. Since the positive adjustment factor is a positive value, the obtained new distance adjustment parameter is larger than the distance adjustment parameter that is not updated, and accordingly, the distance between the left virtual camera and the right virtual camera determined according to the new distance adjustment parameter will also become larger.
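The update rule of S260 can be sketched as follows. The factor magnitudes are illustrative assumptions (the text only fixes their signs), and the clamp to [0, 1] reflects the parameter being a ratio of the camera separation to its maximum.

```python
def update_distance_param(param, dist, thr_c,
                          pos_factor=0.05, neg_factor=-0.1):
    """Sketch of S260 under stated assumptions: superpose a negative factor
    on fast motion, a positive factor on slow nonzero motion, else no change."""
    if dist >= thr_c:
        param += neg_factor       # fast motion: shrink camera separation
    elif dist > 0:
        param += pos_factor       # slow motion: restore camera separation
    # dist == 0: parameter kept unchanged
    return min(1.0, max(0.0, param))  # ratio stays within [0, 1]
```

Repeated fast movement thus progressively narrows the virtual-camera separation, and stillness lets it recover toward the maximum.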
S270, determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, and performing viewpoint compensation according to the viewpoint compensation amount.
In this embodiment, when the distance difference is not less than the crosstalk threshold, the moving direction of the tracked target is determined from the first components of the first and second position coordinates, the moving distance is determined from the distance difference and the crosstalk threshold, and the view offset is finally determined from the moving direction and the moving distance. This perfects the mechanism for determining the view offset when the viewing user moves left and right, parallel to the display screen, too fast, provides a basis for the subsequent viewpoint compensation, and avoids crosstalk. When the distance difference is smaller than the crosstalk threshold, the view offset is directly assigned 0, perfecting the mechanism for the case where the viewing user does not move, or moves slowly, left and right parallel to the display screen, and reducing unnecessary calculation.
EXAMPLE III
Fig. 3 is a flowchart of a viewpoint compensation method in a third embodiment of the present invention. The embodiment of the invention performs additional optimization on the basis of the technical scheme of each embodiment.
Further, after the operation of acquiring the first position coordinate of the tracked target at the current moment, the following operations are added: acquiring a third position coordinate of the tracked target at the corresponding moment after a second delay; determining the midpoint coordinate of the left and right virtual cameras from the first and third position coordinates according to a preset interpolation function and interpolation factor; and adjusting the position coordinates of the left and right virtual cameras according to the midpoint coordinate. The second delay is shorter than the first delay, so that an optimally rendered picture is provided to the viewing user and the visual discomfort caused by jumps in the virtual cameras' movement is avoided.
Further, after the operation of updating the distance-adjustment parameter according to the distance difference, the following operations are added: determining the target distance between the left and right virtual cameras according to the distance-adjustment parameter; and adjusting the position coordinates of the left and right virtual cameras according to the midpoint coordinate and the target distance, so as to avoid dizziness when the user watches the naked-eye 3D picture while moving.
The view point compensation method as shown in fig. 3 includes:
s310, acquiring a first position coordinate of the tracked target at the current moment.
The tracked object is the left eye and the right eye of the watching user, and the corresponding first position coordinate is the eyebrow center coordinate determined by the average value of the left eye coordinate and the right eye coordinate.
S321, obtaining a third position coordinate of the tracked target at the corresponding time after the second delay.
And the third position coordinate is an eyebrow center coordinate determined by the average value of the left eye coordinate and the right eye coordinate of the same watching user at the corresponding moment after the second delay.
Wherein the second delay is set by a technician based on empirical values.
S322, determining the midpoint coordinate of the left virtual camera and the right virtual camera according to the first position coordinate and the third position coordinate, and a preset interpolation function and an interpolation factor.
The first position coordinate is set as the start point of the interpolation, and the third position coordinate as its end point; the midpoint coordinate of the left and right virtual cameras is then determined by interpolation according to the preset interpolation function and interpolation factor. The preset interpolation function may be linear interpolation, spline interpolation, or piecewise interpolation.
S323, adjusting the position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate.
Specifically, the midpoint of the left and right virtual cameras is adjusted to the midpoint coordinate while the distance between the two cameras is kept constant, so that the positions of the cameras change smoothly and, when the user moves quickly, the matching movement of the cameras provides the best rendered picture to the viewing user.
It should be noted that, to avoid the image blurring caused by frequent movement of the left and right virtual cameras, a judgment condition may be added before S322: judging whether the distance between the first position coordinate and the third position coordinate is greater than or equal to a midpoint-adjustment threshold. If it is, S322 is executed; otherwise the midpoint coordinate is kept unchanged and S330 is executed directly.
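Steps S321-S323, including the optional midpoint-adjustment threshold, can be sketched with linear interpolation (one of the interpolation functions the text allows; names and the threshold rule are illustrative):

```python
def lerp(p_start, p_end, t):
    """Linear interpolation between two 3-D points, interpolation factor t in [0, 1]."""
    return tuple(a + t * (b - a) for a, b in zip(p_start, p_end))

def camera_midpoint(first, third, t, current_mid, move_threshold):
    """Sketch of S322/S323 with the optional guard: interpolate a new camera
    midpoint only when the tracked target moved at least move_threshold."""
    moved = sum((a - b) ** 2 for a, b in zip(first, third)) ** 0.5
    if moved < move_threshold:
        return current_mid        # keep midpoint: avoid jitter from small moves
    return lerp(first, third, t)  # smoothed midpoint via the interpolation factor
```

An interpolation factor below 1 makes the cameras trail the head smoothly instead of snapping to each new measurement.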
S330, acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay.
Wherein the second delay is less than the first delay. Preferably, the first delay is an integer multiple of the second delay.
And S340, determining a distance difference value between the first position coordinate and the second position coordinate.
And S350, determining view offset among the views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference.
And S360, updating the distance adjusting parameter according to the distance difference.
S371, determining a target distance between the left virtual camera and the right virtual camera according to the distance adjusting parameter.
The distance-adjustment parameter is the ratio of the distance between the left and right virtual cameras to the maximum distance. The target distance between the cameras is obtained as the product of the distance-adjustment parameter and the preset maximum distance between them. The maximum distance between the left and right virtual cameras corresponds to the interpupillary distance set by the viewing user.
And S372, adjusting the position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate and the target distance.
The determined midpoint coordinate is used as the midpoint position of the left and right virtual cameras; taking the midpoint coordinate as the center, the left virtual camera is adjusted leftward and the right virtual camera rightward, so that the distance between the adjusted left and right virtual camera coordinates equals the target distance.
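Steps S371-S372 can be sketched as follows, assuming (as in the rest of the description) that the cameras are separated along the x axis, parallel to the display screen:

```python
def place_cameras(midpoint, distance_param, max_distance):
    """Sketch of S371/S372: target separation is the distance-adjustment
    parameter times the maximum separation; the cameras sit half that
    distance either side of the midpoint along x."""
    target = distance_param * max_distance   # S371: target camera distance
    half = target / 2.0
    x, y, z = midpoint
    left_cam = (x - half, y, z)              # left camera shifted leftward
    right_cam = (x + half, y, z)             # right camera shifted rightward
    return left_cam, right_cam
```

Shrinking `distance_param` thus directly reduces the parallax between the rendered left-eye and right-eye views, which is the mechanism the embodiment relies on to avoid vertigo.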
And S380, determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, and performing viewpoint compensation according to the viewpoint compensation amount.
In the embodiment of the invention, a third position coordinate of the tracked target at the corresponding time after the second delay is additionally acquired, the midpoint coordinate of the left and right virtual cameras is determined by interpolation from the first position coordinate and the third position coordinate, and the position coordinates of the left and right virtual cameras are then adjusted according to that midpoint coordinate. Thus, when the viewing user moves left and right parallel to the display screen, the corresponding adjustment of the virtual camera positions provides the user with an optimally rendered picture, while the movement of the left and right virtual cameras is smoothed, reducing the visual discomfort that jumps in camera position would otherwise cause.
In the embodiment of the invention, the target distance between the left and right virtual cameras is determined according to the distance adjustment parameter, and the position coordinates of the left and right virtual cameras are adjusted according to the midpoint coordinate and the target distance. Thus, when the viewing user moves left and right parallel to the display screen, the parallax between the left-eye and right-eye views is reduced by decreasing or increasing the distance between the left and right virtual cameras, avoiding the vertigo a user would otherwise experience when watching naked-eye 3D pictures while moving.
Example four
Fig. 4 is a flowchart of a viewpoint compensation method in the fourth embodiment of the present invention. The embodiment of the invention provides a preferred implementation mode on the basis of the technical scheme of each embodiment.
The view point compensation method as shown in fig. 4 includes:
S410, acquiring the left-eye coordinate and the right-eye coordinate of the viewing user at intervals of a set time threshold, and averaging the left-eye and right-eye coordinates to obtain the eyebrow-center position coordinate.
Specifically, the eyebrow-center position coordinate (x', y', z') is obtained according to the formula (x', y', z') = ((x_l + x_r)/2, (y_l + y_r)/2, (z_l + z_r)/2); wherein (x_l, y_l, z_l) is the left-eye coordinate and (x_r, y_r, z_r) is the right-eye coordinate.
Optionally, the time threshold is set to 1/60 seconds.
S411, determining a first distance difference between the eyebrow-center position coordinate at the current time and the eyebrow-center position coordinate obtained the previous time.
Specifically, the first distance difference is determined according to the formula d = sqrt((x'_{t+Δt} - x'_t)^2 + (y'_{t+Δt} - y'_t)^2 + (z'_{t+Δt} - z'_t)^2); wherein (x'_t, y'_t, z'_t) is the eyebrow-center position coordinate obtained the previous time, (x'_{t+Δt}, y'_{t+Δt}, z'_{t+Δt}) is the eyebrow-center position coordinate at the current time, and d is the first distance difference.
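The averaging of S410 and the Euclidean distance of S411 can be sketched as follows (function names are illustrative; `math.dist` requires Python 3.8+):

```python
import math

def eyebrow_center(left_eye, right_eye):
    # Component-wise mean of the left-eye and right-eye coordinates
    return tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))

def first_distance_difference(prev_center, curr_center):
    # Euclidean distance between successive eyebrow-center positions
    return math.dist(prev_center, curr_center)
```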
S412, judging whether the first distance difference is larger than a center adjustment threshold value, if so, executing S413; if not, S414 is performed.
Optionally, the center adjustment threshold is the virtual-world length corresponding to 1 cm of movement in the real world.
S413, taking the eyebrow-center position coordinate obtained the previous time as the start point and the eyebrow-center position coordinate at the current time as the end point, performing linear interpolation with a preset interpolation factor, and updating the midpoint coordinate to the interpolation result. Execution continues with S415.
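The linear interpolation in S413 is a standard lerp; a sketch (the interpolation factor `t` stands for the preset factor mentioned above):

```python
def lerp(start, end, t):
    # Linear interpolation between two 3-D points; t in [0, 1] is the
    # interpolation factor (t = 0 keeps the start, t = 1 reaches the end).
    return tuple(s + (e - s) * t for s, e in zip(start, end))
```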
S414, keeping the midpoint coordinate unchanged, and continuing to execute S415.
S415, when the accumulated set time thresholds exceed a preset time, averaging the left-eye and right-eye coordinates of the viewing user at the current time to obtain the second position coordinate.
Optionally, the preset time is 0.05 second.
S416, averaging the left-eye and right-eye coordinates from the preset time before the current time to obtain the first position coordinate.
And S417, determining a second distance difference value between the first position coordinate and the second position coordinate.
Specifically, the second distance difference is determined according to the formula dist = sqrt(a_x(x_{t+ΔT} - x_t)^2 + a_y(y_{t+ΔT} - y_t)^2 + a_z(z_{t+ΔT} - z_t)^2); wherein dist is the second distance difference, (x_t, y_t, z_t) is the first position coordinate, (x_{t+ΔT}, y_{t+ΔT}, z_{t+ΔT}) is the second position coordinate, and (a_x, a_y, a_z) are the weight values for the different directions. Optionally, (a_x, a_y, a_z) = (1, 0.01, 0.001).
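Assuming the weighted Euclidean form suggested by the per-direction weights (a_x, a_y, a_z) (the patent's original image formula is not reproduced in this text, so this form is a reconstruction), S417 can be sketched as:

```python
import math

def weighted_distance(p1, p2, weights=(1.0, 0.01, 0.001)):
    # Weighted Euclidean distance: the x (horizontal) component dominates,
    # so left/right head movement drives the crosstalk compensation.
    return math.sqrt(sum(w * (b - a) ** 2
                         for w, a, b in zip(weights, p1, p2)))
```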
S418, judging whether the second distance difference value is 0, if so, executing S419; if not, S420 is performed.
And S419, keeping the distance adjusting parameter unchanged, and returning to the step of executing S410.
S420, judging whether the second distance difference value is not smaller than a crosstalk threshold value, if so, executing S421; if not, S422 is performed.
Optionally, the crosstalk threshold is the virtual-world length corresponding to a real-world movement of 6 mm.
S421, superposing a negative adjustment factor to update the distance adjustment parameter, and calculating the view offset between views viewed at different times according to the distance difference and the crosstalk threshold; execution continues with S423.
Optionally, the negative adjustment factor is -0.05. The distance adjustment parameter ranges from 0.001 to 1.
Specifically, the view offset is determined according to the formula HMD = ±(2 × dist/thr_c)^0.5, with the sign given by the moving direction of the tracked target; wherein HMD is the view offset and thr_c is the crosstalk threshold.
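Combining the claim-1 magnitude formula hmd = (2 × dist/thr_c)^0.5 with the moving-direction rule (the sign convention here is an assumption consistent with the positive/negative direction determination in the claims), S421's view-offset calculation can be sketched as:

```python
def view_offset(dist: float, crosstalk_threshold: float,
                moving_positive: bool = True) -> float:
    # Magnitude per the formula hmd = (2 * dist / thr_c) ** 0.5,
    # signed by the moving direction of the tracked target.
    magnitude = (2.0 * dist / crosstalk_threshold) ** 0.5
    return magnitude if moving_positive else -magnitude
```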
S422, superposing a positive adjustment factor to update the distance adjustment parameter, and setting the view offset to 0; execution continues with S423.
To enhance viewing comfort, the absolute value of the positive adjustment factor is preferably smaller than the absolute value of the negative adjustment factor. Optionally, the positive adjustment factor is +0.01.
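The branching update of S418–S422 for the distance adjustment parameter can be sketched as one function. The default factors follow the optional values above; clamping to the stated 0.001–1 range is an assumption about how that range is enforced:

```python
def update_ssa(ssa, dist, crosstalk_threshold,
               negative_factor=-0.05, positive_factor=0.01):
    if dist == 0:
        return ssa  # S419: no movement, keep the parameter unchanged
    # S421 shrinks the camera separation on large moves; S422 slowly
    # restores it on small moves (slow restore improves viewing comfort).
    factor = negative_factor if dist >= crosstalk_threshold else positive_factor
    return min(1.0, max(0.001, ssa + factor))
```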
And S423, determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset.
Specifically, the viewpoint compensation amount is determined according to the formula VC = (1 - sSA) × P × HMD; wherein VC is the viewpoint compensation amount, sSA is the distance adjustment parameter, P is a compensation multiple, and HMD is the view offset.
Optionally, P is any integer in [0, 10].
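A sketch of the compensation-amount formula VC = (1 - sSA) × P × HMD from claim 5 (names are illustrative):

```python
def viewpoint_compensation(ssa: float, p: float, hmd: float) -> float:
    # VC = (1 - sSA) * P * HMD: the more the camera separation has been
    # reduced (small sSA), the larger the compensation applied.
    return (1.0 - ssa) * p * hmd
```

Note the limiting behavior: at the maximum separation (sSA = 1) no compensation is applied, regardless of the view offset.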
S424, determining the target viewpoint of the viewing user, and superposing the viewpoint compensation amount onto the target viewpoint, so as to determine the display layout according to the target viewpoint.
S425, obtaining the target distance as the product of the distance adjustment parameter and the maximum distance between the left and right virtual cameras.
The maximum distance between the left and right virtual cameras is equal to the pupil distance set by the viewing user. The pupil distance is any value between 5.5 cm and 7.5 cm, set by the viewing user.
S426, adjusting the left and right virtual cameras leftward and rightward about the midpoint coordinate so that the distance between them equals the target distance; then jump to S410.
Example five
Fig. 5 is a structural diagram of a viewpoint compensation apparatus in a fifth embodiment of the present invention. The embodiment of the invention is suitable for the condition of watching the naked eye 3D display picture, and the device is realized by software and/or hardware and is specifically configured in electronic equipment for carrying out naked eye 3D display. The view point compensating apparatus shown in fig. 5 includes: a first position coordinate acquisition module 510, a second position coordinate acquisition module 520, a distance difference determination module 530, a view offset determination module 540, a distance adjustment parameter update module 550, and a viewpoint compensation amount determination module 560.
The first position coordinate obtaining module 510 is configured to obtain a first position coordinate of a tracked target at a current time;
a second position coordinate obtaining module 520, configured to obtain a second position coordinate of the tracked target at the corresponding time after the first delay;
a distance difference determination module 530 for determining a distance difference between the first location coordinate and the second location coordinate;
a view offset determining module 540, configured to determine, according to the first position coordinate, the second position coordinate, and the distance difference, a view offset between views viewed at different times;
a distance adjustment parameter updating module 550, configured to update a distance adjustment parameter according to the distance difference; wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera;
and a viewpoint compensation amount determining module 560, configured to determine a viewpoint compensation amount according to the distance adjusting parameter and the view offset, so as to perform viewpoint compensation according to the viewpoint compensation amount.
According to the embodiment of the invention, a first position coordinate and a second position coordinate of a tracked target at the current moment and the first delayed moment are respectively obtained through a first position coordinate obtaining module and a second position coordinate obtaining module; determining, by a distance difference determination module, a distance difference between the first position coordinate and the second position coordinate; determining view offset between views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference value through a view offset determination module; updating a distance adjusting parameter for determining the distance between the left virtual camera and the right virtual camera according to the distance difference value through a distance adjusting parameter updating module; and determining the viewpoint compensation amount according to the distance adjusting parameter and the view offset by a viewpoint compensation amount determining module so as to perform viewpoint compensation according to the viewpoint compensation amount. By adopting the technical scheme, the probability of occurrence of crosstalk phenomenon of left and right eye views caused by left and right movement of the tracked object when the naked eye 3D image is watched is reduced, and the watching experience of a user is improved.
Further, when the distance difference is not less than the crosstalk threshold, the view offset determining module 540 includes:
a moving direction determining unit configured to determine a moving direction of the tracked target according to magnitudes of a first component of the first position coordinate and a first component of the second position coordinate;
a moving distance determining unit, configured to determine a moving distance of the tracked target according to the distance difference and the crosstalk threshold;
and the view offset determining unit is used for determining the view offset according to the moving direction and the moving distance.
Further, the moving direction determining unit includes:
if the first component in the second position coordinate is smaller than the first component in the first position coordinate, determining that the moving direction is a negative direction;
and if the first component in the second position coordinate is not smaller than the first component in the first position coordinate, determining that the moving direction is a positive direction.
Further, the movement distance determination unit includes:
according to the formula hmd = (2 × dist/thr_c)^0.5, determining the moving distance of the tracked target;
wherein hmd is the movement distance, dist is the distance difference, and thr_c is the crosstalk threshold.
Further, the view offset determining module 540 is further configured to:
and when the distance difference is smaller than a crosstalk threshold value, determining that the view offset is 0.
Further, the distance adjustment parameter updating module 550 includes:
the negative regulation updating unit is used for updating the distance regulation parameter according to a negative regulation factor when the distance difference value is not less than the crosstalk threshold value;
the positive regulation updating unit is used for updating the distance regulation parameter according to a positive regulation factor when the distance difference value is nonzero and smaller than a crosstalk threshold value;
and the zero updating unit is used for keeping the distance adjusting parameter unchanged when the distance difference value is zero.
Further, the view compensation amount determining module 560 includes:
a viewpoint compensation amount determination unit for determining the viewpoint compensation amount according to the formula VC = (1 - sSA) × P × HMD;
wherein VC is the viewpoint compensation amount, sSA is the distance adjustment parameter, P is a compensation multiple, and HMD is the view offset.
Further, the apparatus further includes a camera midpoint coordinate adjusting module, specifically including:
a third position coordinate determining unit, configured to, after the first position coordinate of the tracked target at the current time is acquired, acquire a third position coordinate of the tracked target at the corresponding time after the second delay;
a midpoint coordinate determination unit, configured to determine midpoint coordinates of the left virtual camera and the right virtual camera according to a preset interpolation function and an interpolation factor according to the first position coordinate and the third position coordinate;
a midpoint coordinate adjusting unit, configured to adjust position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate; wherein the second delay is less than the first delay.
Further, the device further comprises a camera distance adjusting module, and specifically comprises:
a target distance determining unit, configured to determine a target distance between the left virtual camera and the right virtual camera according to the distance adjusting parameter after updating the distance adjusting parameter according to the distance difference;
and the camera distance adjusting unit is used for adjusting the position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate and the target distance.
Further, the apparatus further includes a target viewpoint obtaining module, specifically configured to:
and determining the current viewpoint of the tracked target, and superposing the viewpoint compensation quantity on the current viewpoint to obtain a target viewpoint so as to determine display layout according to the target viewpoint.
The viewpoint compensation device can execute the viewpoint compensation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the viewpoint compensation method.
Example six
Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to a sixth embodiment of the present invention, where the electronic device includes a display device 610, an eye tracking device 620, a processor 630, and a storage device 640.
The display device 610 is configured to display a left-eye view and a right-eye view in a certain arrangement;
the human eye tracking device 620 is used for acquiring a first position coordinate of a tracked target at the current moment; the first time delay unit is also used for acquiring a first position coordinate of the tracked target at a corresponding moment after the first time delay;
one or more processors 630;
a storage device 640 for storing one or more programs.
In fig. 6, one processor 630 is taken as an example. The processor 630 in the electronic device may be connected to the display device 610, the eye tracking device 620 and the storage device 640 through a bus or other means (connection through a bus is taken as the example in fig. 6).
In this embodiment, the processor 630 in the electronic device may determine a distance difference between the first position coordinate and the second position coordinate obtained by the eye tracking device 620; may determine the view offset between views viewed at different times according to the first and second position coordinates acquired by the eye tracking device 620 and the distance difference it has determined; may update, according to the distance difference, the distance adjustment parameter stored in the storage device 640 for determining the distance between the left and right virtual cameras; and may determine the viewpoint compensation amount according to the distance adjustment parameter and the view offset, so as to perform viewpoint compensation according to the viewpoint compensation amount.
The storage device 640 in the electronic device may be used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the viewpoint compensation method in the embodiment of the present invention (for example, the first position coordinate acquiring module 510, the second position coordinate acquiring module 520, the distance difference determining module 530, the view offset determining module 540, the distance adjustment parameter updating module 550, and the viewpoint compensation amount determining module 560 shown in fig. 5). The processor 630 executes various functional applications and data processing of the electronic device by running software programs, instructions and modules stored in the storage device 640, that is, implements the viewpoint compensation method in the above-described method embodiments.
The storage device 640 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data and the like (the first position coordinates, the second position coordinates, the distance difference value, the view shift amount, the distance adjustment parameter, the viewpoint compensation amount, and the like in the above-described embodiments). Further, the storage 640 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage device 640 may further include memory located remotely from the processor 630, which may be connected to a server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a viewpoint compensation apparatus, implements the viewpoint compensation method provided by the embodiments of the present invention, the method including: acquiring a first position coordinate of a tracked target at the current moment; acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay; determining a distance difference between the first position coordinate and the second position coordinate; determining view offset among views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference; updating a distance adjusting parameter according to the distance difference, wherein the distance adjusting parameter is used to determine the distance between the left virtual camera and the right virtual camera; and determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, so as to perform viewpoint compensation according to the viewpoint compensation amount.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the viewpoint compensation method according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A view compensation method, comprising:
acquiring a first position coordinate of a tracked target at the current moment;
acquiring a second position coordinate of the tracked target at the corresponding moment after the first delay;
determining a distance difference between the first position coordinate and the second position coordinate;
determining view offset among views watched at different moments according to the first position coordinate, the second position coordinate and the distance difference;
updating a distance adjusting parameter according to the distance difference; wherein the distance adjustment parameter is used to determine a distance between the left virtual camera and the right virtual camera;
determining a viewpoint compensation amount according to the distance adjusting parameter and the view offset, so as to perform viewpoint compensation according to the viewpoint compensation amount;
when the distance difference is not less than a crosstalk threshold, determining, according to the first position coordinate, the second position coordinate, and the distance difference, a view offset between views viewed at different times, including:
determining the moving direction of the tracked target according to the magnitude of the first component of the first position coordinate and the first component of the second position coordinate;
determining the moving distance of the tracked target according to the distance difference and the crosstalk threshold;
determining the view offset according to the moving direction and the moving distance;
determining the moving distance of the tracked target according to the distance difference and the crosstalk threshold, including:
according to the formula hmd = (2 × dist/thr_c)^0.5, determining the moving distance of the tracked target;
wherein hmd is the movement distance, dist is the distance difference, and thr_c is the crosstalk threshold.
2. The method of claim 1, wherein determining the direction of movement of the tracked object based on the magnitude of the first component in the first and second location coordinates comprises:
if the first component in the second position coordinate is smaller than the first component in the first position coordinate, determining that the moving direction is a negative direction;
and if the first component in the second position coordinate is not smaller than the first component in the first position coordinate, determining that the moving direction is a positive direction.
3. The method of claim 1, wherein the view offset is determined to be 0 when the distance difference is less than a crosstalk threshold.
4. The method of claim 1, wherein updating a distance adjustment parameter based on the distance difference comprises:
if the distance difference is not smaller than the crosstalk threshold, updating the distance adjusting parameter according to a negative adjusting factor;
if the distance difference value is nonzero and smaller than a crosstalk threshold value, updating the distance adjusting parameter according to a positive adjusting factor;
and if the distance difference is zero, keeping the distance adjusting parameter unchanged.
5. The method of claim 1, wherein determining the view compensation amount according to the distance adjustment parameter and the view offset comprises:
determining the viewpoint compensation amount according to the formula VC = (1 - sSA) × P × HMD;
wherein VC is the viewpoint compensation amount, sSA is the distance adjustment parameter, P is a compensation multiple, and HMD is the view offset.
6. The method according to any one of claims 1-5, further comprising, after said obtaining the first position coordinates of the tracked object at the current time,:
acquiring a third position coordinate of the tracked target at the corresponding moment after the second delay;
determining the midpoint coordinate of the left virtual camera and the right virtual camera according to the first position coordinate and the third position coordinate and a preset interpolation function and an interpolation factor;
adjusting the position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate;
wherein the second delay is less than the first delay.
7. The method of claim 6, further comprising, after said updating a distance adjustment parameter based on said distance difference:
determining a target distance between the left virtual camera and the right virtual camera according to the distance adjustment parameter;
and adjusting the position coordinates of the left virtual camera and the right virtual camera according to the midpoint coordinate and the target distance.
8. The method of claim 1, wherein the performing the view compensation according to the view compensation amount comprises:
and determining the current viewpoint of the tracked target, and superposing the viewpoint compensation quantity on the current viewpoint to obtain a target viewpoint so as to determine display layout according to the target viewpoint.
9. An electronic device comprising an eye tracking apparatus and a display apparatus, further comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs being executable by the one or more processors to cause the one or more processors to implement a view compensation method as recited in any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a view compensation method according to any one of claims 1 to 8.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811119841.9A | 2018-09-25 | 2018-09-25 | Viewpoint compensation method and device, electronic equipment and storage medium
Publications (2)

Publication Number | Publication Date
---|---
CN109104603A | 2018-12-28
CN109104603B | 2020-11-03
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109104603B (en) | Viewpoint compensation method and device, electronic equipment and storage medium | |
US9204140B2 (en) | Display device and display method | |
CN108663799B (en) | Display control system and display control method of VR image | |
US9646383B2 (en) | Image processing apparatus, image capturing apparatus, and display apparatus | |
CN107105213B (en) | Stereoscopic display device | |
US20110149031A1 (en) | Stereoscopic image, multi-view image, and depth image acquisition apparatus and control method thereof | |
US9710955B2 (en) | Image processing device, image processing method, and program for correcting depth image based on positional information | |
US8803947B2 (en) | Apparatus and method for generating extrapolated view | |
US11122249B2 (en) | Dynamic convergence adjustment in augmented reality headsets |
CN108632599B (en) | Display control system and display control method of VR image | |
CN109191506A (en) | Depth map processing method, system, and computer-readable storage medium |
US20190139246A1 (en) | Information processing method, wearable electronic device, and processing apparatus and system | |
JP6915165B2 (en) | Equipment and methods for generating view images | |
TWI786107B (en) | Apparatus and method for processing a depth map | |
US20130208097A1 (en) | Three-dimensional imaging system and image reproducing method thereof | |
EP2750392A1 (en) | Visually-assisted stereo acquisition from a single camera | |
JP2013201688A (en) | Image processing apparatus, image processing method, and image processing program | |
KR20120133710A (en) | Apparatus and method for generating 3d image using asymmetrical dual camera module | |
JP5741353B2 (en) | Image processing system, image processing method, and image processing program | |
JP2015149547A (en) | Image processing method, image processing apparatus, and electronic apparatus | |
CN115202475A (en) | Display method, display device, electronic equipment and computer-readable storage medium | |
JP5838775B2 (en) | Image processing method, image processing system, and image processing program | |
KR20160041403A (en) | Method for gernerating 3d image content using information on depth by pixels, and apparatus and computer-readable recording medium using the same | |
CN115327782B (en) | Display control method and device, head-mounted display equipment and readable storage medium | |
CN108234983A (en) | Three-dimensional imaging processing method, device, and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2020-04-07. Address after: 215634, north side of Chengang Road and west side of Ganghua Road, Jiangsu Environmental Protection New Material Industrial Park, Zhangjiagang City, Suzhou City, Jiangsu Province. Applicant after: ZHANGJIAGANG KANGDE XIN OPTRONICS MATERIAL Co.,Ltd. Address before: Room 202, Building 5, No. 690 Bibo Road, Zhangjiang Hi-Tech Park, Pudong New Area, Shanghai, 201203. Applicant before: WZ TECHNOLOGY Inc. |
GR01 | Patent grant | ||
GR01 | Patent grant |