CN114449250A - Method and device for determining viewing position of user relative to naked eye 3D display equipment


Info

Publication number: CN114449250A
Application number: CN202210114132.1A
Authority: CN (China)
Prior art keywords: user, offset, determining, group, monitoring images
Priority date: 2022-01-30
Filing date: 2022-01-30
Publication date: 2022-05-06
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 张建伟, 闫文龙, 夏正国
Current assignee: Deep Vision Technology Nanjing Co ltd
Original assignee: Deep Vision Technology Nanjing Co ltd
Application filed by: Deep Vision Technology Nanjing Co ltd


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30: Image reproducers
    • H04N 13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N 13/366: Image reproducers using viewer tracking

Abstract

The embodiment of the invention discloses a method and a device for determining the viewing position of a user relative to a naked-eye 3D display device. The method comprises: determining at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device; determining the offset of the user in each group of target monitoring images; and determining the viewing position of the user relative to the screen according to those offsets. The technical scheme of the embodiment of the invention enables eye tracking over a wider range, gives the user more flexible choice of viewing position, and improves the user's 3D experience.

Description

Method and device for determining viewing position of user relative to naked eye 3D display equipment
Technical Field
The embodiment of the invention relates to the technical field of naked eye 3D display, in particular to a method and a device for determining the viewing position of a user relative to naked eye 3D display equipment.
Background
To address the limited viewing positions of naked-eye 3D displays, the prior art proposes eye-tracking schemes: the position of the viewer relative to the naked-eye 3D display is determined in real time, and the displayed content or the relative position of the light-splitting element is adjusted so that the viewer always sees ideal 3D content.
However, the tracking range of the eye-tracking methods in the related art is limited. When the user is outside the tracking range, the user's position cannot be determined, and ideal 3D display content cannot be presented. In addition, when the user is near the edge of the tracking range, the accuracy of the determined position information decreases, greatly degrading the user's 3D experience.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining the viewing position of a user relative to a naked-eye 3D display device, which enable eye tracking over a wider range, provide the user with more flexible position choices, and improve the user's 3D experience.
In a first aspect, an embodiment of the present invention provides a method for determining the viewing position of a user relative to a naked-eye 3D display device, the method comprising:
determining at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
determining the offset of the user in each group of target monitoring images; and
determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
In a second aspect, an embodiment of the present invention further provides an apparatus for determining the viewing position of a user relative to a naked-eye 3D display device, the apparatus comprising:
a target monitoring image determining module, configured to determine at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
a user offset determining module, configured to determine the offset of the user in each group of target monitoring images; and
a viewing position determining module, configured to determine the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for determining the viewing position of a user relative to a naked-eye 3D display device according to any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for determining the viewing position of a user with respect to a naked-eye 3D display device according to any one of the embodiments of the present invention.
According to the technical scheme provided by the embodiment of the invention, at least one group of target monitoring images containing user face information is determined from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device; the offset of the user in each group of target monitoring images is determined; and the viewing position of the user relative to the screen is determined according to those offsets. This enables eye tracking over a wider range, provides the user with more flexible position choices, and improves the user's 3D experience.
Drawings
Fig. 1 is a flowchart of a method for determining a viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for determining a viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a user in position 1 provided by an embodiment of the present invention;
fig. 4(a) is a second view determined by the image capturing module 1 when the user is at the position 1 according to the embodiment of the present invention;
fig. 4(b) is a first view determined by the image capturing module 1 when the user is at the position 1 according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of a user in position 2 provided by an embodiment of the present invention;
fig. 6(a) is a second view determined by the image capturing module 1 when the user is at the position 2 according to the embodiment of the present invention;
fig. 6(b) is a first view determined by the image capturing module 1 when the user is at the position 2 according to the embodiment of the present invention;
fig. 7(a) is a second view determined by the image capturing module 2 when the user is in the position 2 according to the embodiment of the present invention;
fig. 7(b) is a first view determined by the image capturing module 2 when the user is at the position 2 according to the embodiment of the present invention;
FIG. 8 is a schematic diagram of a user in position 3 provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a device for determining a viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a method for determining the viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention. The method may be performed by an apparatus for determining the viewing position of a user relative to a naked-eye 3D display device; the apparatus may be implemented in software and/or hardware and may be configured in an electronic device for determining the viewing position of a user relative to a naked-eye 3D display device. The method applies to scenarios in which a user watches a 3D picture on a naked-eye 3D display device. As shown in fig. 1, the technical solution provided by the embodiment of the present invention specifically includes:
at least one set of target monitoring images including user face information is determined from at least two sets of candidate monitoring images S110.
Each group of candidate monitoring images is acquired by one of the image acquisition modules arranged on the naked-eye 3D display device, and there are at least two such modules.
Specifically, the naked-eye 3D display device may be a display device that is viewed directly with the naked eye, without 3D glasses, to achieve a 3D effect; for example, it may be a naked-eye 3D display. At least two image acquisition modules are provided, and they may be arranged on the naked-eye 3D display device in a straight line, in a broken line, or in an arc, the arrangement being chosen according to actual needs. Each image acquisition module may include two, three, or more image acquisition devices, the number being set according to actual needs, and all image acquisition devices in a module output the captured candidate monitoring images through the same interface. An image acquisition device may be any type of sensor capable of capturing the user's facial information, such as a single camera or multiple cameras.
A candidate monitoring image may be an eye-tracking, face-tracking, or head-tracking monitoring picture, set according to actual needs. A target monitoring image is a candidate view that contains the user's complete face information. The present scheme determines at least one group of target monitoring images containing user face information from the at least two groups of candidate monitoring images; if a group of candidate monitoring images yields only one target monitoring image, the viewing position of the user relative to the screen cannot be determined from that image alone.
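As an illustration of S110 only (the patent does not prescribe a particular face detector), the filtering might look like the following Python sketch. It assumes OpenCV's bundled Haar face detector and represents each candidate group as a list of camera views; the names candidate_groups, view_contains_face, and select_target_groups are illustrative, not the patent's.

```python
# Hypothetical sketch of S110: keep only the image groups in which every
# view contains a detectable face. Uses OpenCV's bundled Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def view_contains_face(view_bgr) -> bool:
    """Return True if at least one face is detected in this view (a BGR array)."""
    gray = cv2.cvtColor(view_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def select_target_groups(candidate_groups):
    """candidate_groups: one list of views per image acquisition module.
    A group qualifies as a target group only if all of its views show the face."""
    return [group for group in candidate_groups
            if all(view_contains_face(view) for view in group)]
```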
S120: the offset of the user in each group of target monitoring images is determined.
For example, for each group of target monitoring images, the scheme may determine the offset of the user in each view of the group and then take the absolute value of the average of those per-view offsets as the offset of the user in the group of target monitoring images.
For each view, the scheme may take the coordinates of the center point of the view as the coordinate origin and then take the horizontal coordinate of the center of the user's face in the view, relative to that origin, as the offset of the user in the view. The horizontal coordinates of the two regions on either side of the view's center point have opposite signs.
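A minimal sketch of this per-view offset, assuming face_center_x is produced by some face detector (the patent does not prescribe one):

```python
def view_offset(face_center_x: float, view_width: int) -> float:
    """Horizontal offset of the user in one view: the face-center x coordinate
    measured from the view's center point, so the sign flips between the two
    sides of the center."""
    return face_center_x - view_width / 2.0
```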
S130: the viewing position of the user relative to the screen is determined according to the offset of the user in each group of target monitoring images.
According to the scheme, the viewing position of the user relative to the screen can be determined from the offset of the user in each group of target monitoring images, and several strategies are possible. For example, the scheme can determine the offset with the minimum value among the offsets of the user in each group of target monitoring images and determine the viewing position of the user relative to the screen from that offset. As another example, the scheme can calculate, from the offset of the user in each group of target monitoring images, the first candidate viewing position determined from each group, and then perform a weighted average over the first candidate viewing positions, with the offsets as weights, to obtain the first actual viewing position of the user relative to the screen. As a further example, the scheme can determine the candidate offsets smaller than a preset offset among the offsets of the user in each group of target monitoring images, determine a second candidate viewing position of the user relative to the screen from each candidate offset, and take the average of the second candidate viewing positions as the second actual viewing position of the user relative to the screen. The strategy for determining the viewing position of the user relative to the screen from the offsets can be chosen according to actual needs; each strategy is detailed in the second embodiment below.
After the viewing position of the user relative to the screen is determined, the scheme can present an ideal naked-eye 3D display picture to the user according to that position, for example by moving the relative position between the 3D light-splitting device and the display screen in the naked-eye 3D display device, or by moving the content displayed on the display screen.
According to the technical scheme provided by the embodiment of the invention, at least one group of target monitoring images containing user face information is determined from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device; the offset of the user in each group of target monitoring images is determined; and the viewing position of the user relative to the screen is determined according to those offsets. This enables eye tracking over a wider range, provides the user with more flexible position choices, and improves the user's 3D experience.
Fig. 2 is a flowchart of another method for determining the viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention. This embodiment is optimized on the basis of the foregoing embodiment; for the points they share, refer to the embodiment above, which are not repeated here. As shown in fig. 2, a method for determining the viewing position of a user relative to a naked-eye 3D display device in an embodiment of the present invention may include:
S210: at least one group of target monitoring images containing user face information is determined from the at least two groups of candidate monitoring images.
Each group of target monitoring images comprises a first view and a second view.
Illustratively, as shown in fig. 3 and figs. 4(a)-4(b), when the user is at position 1 there is only one group of target monitoring images, determined by image acquisition module 1, and it includes two views: view 11 and view 12. The group of candidate monitoring images determined by image acquisition module 2 does not include the user's face information, and neither does the group determined by image acquisition module 3.
As shown in fig. 5 and figs. 6(a)-6(b), when the user moves from position 1 to position 2, the group of target monitoring images determined by image acquisition module 1 includes two views, view 11 and view 12, and, as shown in figs. 7(a)-7(b), the group determined by image acquisition module 2 includes two views, view 21 and view 22. The group of candidate monitoring images determined by image acquisition module 3 does not include the user's face information.
S220: a first offset of the user in the first view is determined according to the first view in any group of target monitoring images, and a second offset of the user in the second view is determined according to the second view in the group of target monitoring images.
For the two views in any group of target monitoring images, the scheme determines the first offset of the user in the first view from the first view and the second offset of the user in the second view from the second view, and then determines the offset of the user in the group of target monitoring images from the first and second offsets.
In this embodiment, optionally, determining the first offset of the user in the first view according to the first view in any group of target monitoring images, and determining the second offset of the user in the second view according to the second view in the group of target monitoring images, includes: taking the coordinates of the center of the user's face in the first view relative to the center point of the first view as the first offset of the user in the first view; and taking the coordinates of the center of the user's face in the second view relative to the center point of the second view as the second offset of the user in the second view.
The first view may be the left view or the right view in the group of target monitoring images, and likewise for the second view. As shown in figs. 6(a)-6(b), taking the group of target monitoring images determined by image acquisition module 1 as an example, the coordinates of the center point of view 11 are taken as the coordinate origin, and the horizontal coordinate of the center of the user's face in view 11 relative to that center point is taken as the first offset x1L of the user in view 11. Similarly, the coordinates of the center point of view 12 are taken as the coordinate origin, and the horizontal coordinate of the center of the user's face in view 12 relative to that center point is taken as the second offset x1R of the user in view 12. The sign of each offset is determined by which side of the coordinate origin the center of the user's face falls on, the horizontal coordinates on the two sides of the view center having opposite signs.
For example, as shown in figs. 7(a)-7(b), taking the group of target monitoring images determined by image acquisition module 2 as an example, the horizontal coordinate of the center of the user's face in view 21 relative to the center point of view 21 is taken as the first offset x2L of the user in view 21, and the horizontal coordinate of the center of the user's face in view 22 relative to the center point of view 22 is taken as the second offset x2R of the user in view 22.
Thus, the coordinates of the center of the user's face in the first view relative to the center point of the first view are taken as the first offset of the user in the first view, and the coordinates of the center of the user's face in the second view relative to the center point of the second view are taken as the second offset of the user in the second view. This provides a reliable data source for determining the offset of the user in the group of target monitoring images, which in turn improves the accuracy of determining the viewing position of the user relative to the screen.
S230: the offset of the user in the group of target monitoring images is determined according to the first offset and the second offset.
According to the scheme, the absolute value of the average value of the first offset and the second offset can be used as the offset of the user in the group of target monitoring images.
In this embodiment, optionally, determining the offset of the user in the group of target monitoring images according to the first offset and the second offset includes: determining the absolute value of the average of the first offset and the second offset as the offset of the user in the group of target monitoring images.
As shown in figs. 6(a)-6(b), taking the group of target monitoring images determined by image acquisition module 1 as an example, the offset DC1 of the user in the group can be determined by the formula DC1 = |(x1L + x1R) / 2|.
Similarly, as shown in figs. 7(a)-7(b), for the group of target monitoring images determined by image acquisition module 2, the offset DC2 of the user in the group can be determined by the formula DC2 = |(x2L + x2R) / 2|.
Thus, the absolute value of the average of the first offset and the second offset is taken as the offset of the user in the group of target monitoring images. This avoids the large error that can result from considering a single offset alone, allows the offset of the user in each group of target monitoring images to be determined accurately, and provides a reliable data source for determining the viewing position of the user relative to the screen.
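As an illustration, this computation reduces to one function; the sketch below simply restates the formulas above in Python.

```python
def group_offset(first_offset: float, second_offset: float) -> float:
    """Offset of the user in one group of target monitoring images:
    the absolute value of the mean of the two per-view offsets,
    e.g. DC1 = |(x1L + x1R) / 2|."""
    return abs((first_offset + second_offset) / 2.0)
```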
S240: the viewing position of the user relative to the screen is determined according to the offset of the user in each group of target monitoring images.
In this embodiment, optionally, determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images includes: determining the target offset with the minimum value among the offsets of the user in each group of target monitoring images; and determining the viewing position of the user relative to the screen according to the target offset.
When there are multiple groups of target monitoring images, the offset of the user in each group can be determined separately; the offset with the minimum value, i.e. the target offset, is then selected from the determined offsets, and the viewing position of the user relative to the screen is determined according to the target offset. For the procedure that turns an offset into a viewing position, refer to the related art.
Thus, by selecting the target offset with the minimum value among the offsets of the user in each group of target monitoring images and determining the viewing position from it, the spatial position of the user relative to the screen can be determined from multiple offsets, as in the sketch below.
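A minimal sketch of the minimum-offset strategy, under the assumption that positions[i] holds the viewing position that the related-art procedure derives from group i:

```python
def viewing_position_by_min_offset(offsets, positions):
    """Pick the group with the smallest offset DC_i and return the
    viewing position derived from that group alone."""
    i = min(range(len(offsets)), key=lambda k: offsets[k])
    return positions[i]
```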
In a possible embodiment, optionally, determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images includes: calculating, from the offset of the user in each group of target monitoring images, the first candidate viewing position determined from that group; and performing a weighted average over the first candidate viewing positions, with the offsets as weights, to obtain the first actual viewing position of the user relative to the screen.
As shown in fig. 5, figs. 6(a)-6(b) and figs. 7(a)-7(b), the first candidate viewing position P1 of the user relative to the screen can be determined from the offset DC1 of the user in the group of target monitoring images determined by image acquisition module 1, and the first candidate viewing position P2 can be determined from the offset DC2 in the group determined by image acquisition module 2. The scheme then applies the formula (P1 × DC1 + P2 × DC2) / (DC1 + DC2), with the offsets of the user in each group of target monitoring images as weights, to obtain the first actual viewing position of the user relative to the screen.
Thus, by calculating the first candidate viewing position from each group's offset and taking an offset-weighted average over the first candidate viewing positions, the spatial position of the user relative to the screen can be determined from multiple offsets and obtained more accurately, as in the sketch below.
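A sketch of this weighted average, generalizing the patent's two-group formula (P1 × DC1 + P2 × DC2) / (DC1 + DC2) to any number of groups; treating the candidate viewing positions as scalar coordinates is an assumption of the sketch:

```python
def viewing_position_weighted(positions, offsets):
    """First actual viewing position: weighted average of the first candidate
    viewing positions P_i, with the group offsets DC_i as weights.
    Assumes at least one offset is nonzero."""
    total = sum(offsets)
    return sum(p * dc for p, dc in zip(positions, offsets)) / total
```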
In another optional embodiment, determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images includes: determining the candidate offsets smaller than a preset offset among the offsets of the user in each group of target monitoring images; determining a second candidate viewing position of the user relative to the screen according to each candidate offset; and taking the average of the second candidate viewing positions as the second actual viewing position of the user relative to the screen.
The preset offset may be, for example, 10 cm or 15 cm, and can be set according to actual needs. As shown in fig. 8, when the user is at position 3, the offset DC1 of the user in the group of target monitoring images determined by image acquisition module 1 is 8 cm, the offset DC2 in the group determined by image acquisition module 2 is 12 cm, and the offset DC3 in the group determined by image acquisition module 3 is 9 cm. With a preset offset of 10 cm, DC1 and DC3 are smaller than the preset offset and are therefore taken as the candidate offsets. The second candidate viewing position determined from DC1 is P1 and the one determined from DC3 is P3, and the average of P1 and P3 is taken as the second actual viewing position of the user relative to the screen.
Thus, by keeping only the offsets below the preset offset, determining a second candidate viewing position from each of them, and averaging those positions, the spatial position of the user relative to the screen can be determined from multiple offsets and obtained more accurately, as in the sketch below.
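A sketch of this threshold strategy, using the 10 cm preset offset of the example above as the default:

```python
def viewing_position_filtered(positions, offsets, preset: float = 10.0):
    """Second actual viewing position: average of the second candidate
    viewing positions whose group offset is below the preset offset.
    Returns None when no offset qualifies."""
    picked = [p for p, dc in zip(positions, offsets) if dc < preset]
    return sum(picked) / len(picked) if picked else None
```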
According to the technical scheme provided by the embodiment of the invention, at least one group of target monitoring images containing user face information is determined from at least two groups of candidate monitoring images; for any group, the first offset of the user in the first view and the second offset of the user in the second view are determined; the offset of the user in the group of target monitoring images is determined from the first and second offsets; and the viewing position of the user relative to the screen is determined according to the offset of the user in each group of target monitoring images. This enables eye tracking over a wider range, provides the user with more flexible position choices, and improves the user's 3D experience.
Fig. 9 is a schematic structural diagram of an apparatus for determining the viewing position of a user relative to a naked-eye 3D display device according to an embodiment of the present invention. The apparatus may be arranged in an electronic device for determining the viewing position of a user relative to a naked-eye 3D display device. As shown in fig. 9, the apparatus includes:
a target monitoring image determining module 310, configured to determine at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
a user offset determining module 320, configured to determine the offset of the user in each group of target monitoring images; and
a viewing position determining module 330, configured to determine the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
Optionally, each set of target monitoring images includes a first view and a second view; correspondingly, the user offset determining module 320 includes a view offset determining unit, configured to determine a first offset of a user in a first view according to the first view in any group of target monitoring images; determining a second offset of the user in a second view according to the second view in the group of target monitoring images; and the user offset determining unit is used for determining the offset of the user in the group of target monitoring images according to the first offset and the second offset.
Optionally, the view offset determining unit is specifically configured to use coordinates of a center of the user's face in the first view with respect to a center point of the first view as a first offset of the user in the first view; and, taking the coordinates of the center of the user's face in the second view relative to the center point of the second view as a second offset of the user in the second view.
Optionally, the user offset determining unit is specifically configured to determine an absolute value of an average value of the first offset and the second offset as an offset of a user in the group of target monitored images.
Optionally, the viewing position determining module 330 includes a target offset determining unit, configured to determine a target offset with a minimum value from offsets of users in each set of target monitoring images; and the viewing position determining unit is used for determining the viewing position of the user relative to the screen according to the target offset.
Optionally, the viewing position determining module 330 includes a first candidate viewing position determining unit, configured to calculate, according to an offset of a user in each group of target monitoring images, each first candidate viewing position determined by the user based on each group of target monitoring images; and the first actual viewing position determining unit is used for taking the offset of the user in each group of target monitoring images as weight and carrying out weighted average calculation on each first candidate viewing position to obtain a first actual viewing position of the user relative to the screen.
Optionally, the viewing position determining module 330 includes a candidate offset determining unit, configured to determine a candidate offset smaller than a preset offset from offsets of users in each group of target monitored images; a second candidate viewing position determining unit for determining a second candidate viewing position of the user with respect to the screen based on each candidate offset; and a second actual viewing position determining unit configured to use an average value of the second candidate viewing positions as a second actual viewing position of the user with respect to the screen.
The apparatus provided by this embodiment can execute the method for determining the viewing position of a user relative to a naked-eye 3D display device provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 10, the electronic device includes:
one or more processors 410, one processor 410 being exemplified in FIG. 10;
a memory 420;
the apparatus may further include: an input device 430 and an output device 440.
The processor 410, the memory 420, the input device 430 and the output device 440 of the apparatus may be connected by a bus or other means, and fig. 10 illustrates the connection by a bus as an example.
The memory 420 is a non-transitory computer-readable storage medium and may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining the viewing position of a user relative to a naked-eye 3D display device in the embodiments of the present invention. By running the software programs, instructions, and modules stored in the memory 420, the processor 410 executes the various functional applications and data processing of the computer device, i.e., implements the method of the above method embodiments for determining the viewing position of a user relative to a naked-eye 3D display device, that is:
determining at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
determining the offset of the user in each group of target monitoring images; and
determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 420 may optionally include memory located remotely from processor 410, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus. The output device 440 may include a display device such as a display screen.
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for determining a viewing position of a user with respect to a naked-eye 3D display device, that is, the method includes:
determining at least one group of target monitoring images containing user face information from at least two groups of candidate monitoring images, where each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
determining the offset of the user in each group of target monitoring images; and
determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for determining a viewing position of a user relative to a naked eye 3D display device is characterized by comprising the following steps:
determining at least one group of target monitoring images including user face information from at least two groups of candidate monitoring images, wherein each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
determining the offset of the user in each group of target monitoring images;
and determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
2. The method of claim 1, wherein each group of target monitoring images includes a first view and a second view;
correspondingly, determining the offset of the user in each group of target monitoring images comprises the following steps:
determining a first offset of a user in a first view according to the first view in any group of target monitoring images; determining a second offset of the user in a second view according to the second view in the group of target monitoring images;
and determining the offset of the user in the group of target monitoring images according to the first offset and the second offset.
3. The method of claim 2, wherein determining the first offset of the user in the first view according to the first view in any group of target monitoring images, and determining the second offset of the user in the second view according to the second view in the group of target monitoring images, comprises:
taking the coordinates of the center of the user's face in the first view relative to the center point of the first view as the first offset of the user in the first view;
and taking the coordinates of the center of the user's face in the second view relative to the center point of the second view as the second offset of the user in the second view.
4. The method of claim 2, wherein determining the offset of the user in the group of target monitoring images according to the first offset and the second offset comprises:
and determining the absolute value of the average value of the first offset and the second offset as the offset of the user in the group of target monitoring images.
5. The method of claim 1, wherein determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images comprises:
determining a target offset with a minimum value from the offsets of the user in each group of target monitoring images;
and determining the viewing position of the user relative to the screen according to the target offset.
6. The method of claim 1, wherein determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images comprises:
calculating, according to the offset of the user in each group of target monitoring images, the first candidate viewing position determined from each group of target monitoring images;
and taking the offsets of the user in each group of target monitoring images as weights, and performing a weighted average calculation over the first candidate viewing positions to obtain a first actual viewing position of the user relative to the screen.
7. The method of claim 1, wherein determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images comprises:
determining candidate offsets smaller than a preset offset from the offsets of the user in each group of target monitoring images;
determining a second candidate viewing position of the user relative to the screen according to each candidate offset;
and taking the average value of the second candidate viewing positions as a second actual viewing position of the user relative to the screen.
8. An apparatus for determining a viewing position of a user with respect to a naked eye 3D display device, comprising:
a target monitoring image determining module, configured to determine at least one group of target monitoring images including user face information from at least two groups of candidate monitoring images, wherein each group of candidate monitoring images is acquired by one of at least two image acquisition modules arranged on the naked-eye 3D display device;
a user offset determining module, configured to determine the offset of the user in each group of target monitoring images;
and the viewing position determining module is used for determining the viewing position of the user relative to the screen according to the offset of the user in each group of target monitoring images.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method of determining a viewing position of a user with respect to a naked eye 3D display device as claimed in any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of determining a viewing position of a user with respect to a naked-eye 3D display device according to any one of claims 1 to 7.
CN202210114132.1A, filed 2022-01-30 (priority date 2022-01-30): Method and device for determining viewing position of user relative to naked eye 3D display equipment. Status: Pending. Publication: CN114449250A (en).

Priority Applications (1)

CN202210114132.1A (priority date 2022-01-30, filing date 2022-01-30): Method and device for determining viewing position of user relative to naked eye 3D display equipment


Publications (1)

CN114449250A, published 2022-05-06

Family

ID=81371936

Family Applications (1)

CN202210114132.1A (priority date 2022-01-30, filing date 2022-01-30): Method and device for determining viewing position of user relative to naked eye 3D display equipment

Country Status (1)

CN: CN114449250A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253778A (en) * 2005-09-29 2008-08-27 株式会社东芝 Three-dimensional image display device, three-dimensional image display method, and computer program product for three-dimensional image display
US20160014389A1 (en) * 2013-03-21 2016-01-14 Panasonic Intellectual Property Management Co., Ltd. Image processing method and image processing device
US20160165205A1 (en) * 2014-12-03 2016-06-09 Shenzhen Estar Technology Group Co.,Ltd Holographic displaying method and device based on human eyes tracking
CN108111838A (en) * 2017-12-25 2018-06-01 上海玮舟微电子科技有限公司 A kind of bore hole 3D display correcting fixture and bearing calibration
CN108419072A (en) * 2018-01-17 2018-08-17 深圳市绚视科技有限公司 A kind of bearing calibration of bore hole 3D display screen and means for correcting, storage medium
WO2020177132A1 (en) * 2019-03-07 2020-09-10 深圳市立体通科技有限公司 Automatic calibration method for image arrangement of naked 3d display screen, and electronic device
CN110263657A (en) * 2019-05-24 2019-09-20 亿信科技发展有限公司 A kind of human eye method for tracing, device, system, equipment and storage medium
CN113411564A (en) * 2021-06-21 2021-09-17 纵深视觉科技(南京)有限责任公司 Method, device, medium and system for measuring human eye tracking parameters


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination