CN114173109A - Watching user tracking method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114173109A
Authority
CN
China
Prior art keywords
target
user
image
determining
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210031712.4A
Other languages
Chinese (zh)
Inventor
张建伟
闫文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Vision Technology Nanjing Co ltd
Original Assignee
Deep Vision Technology Nanjing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Vision Technology Nanjing Co ltd filed Critical Deep Vision Technology Nanjing Co ltd
Priority to CN202210031712.4A
Publication of CN114173109A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/366 - Image reproducers using viewer tracking
    • H04N13/368 - Image reproducers using viewer tracking for two or more viewers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/302 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays

Abstract

The embodiment of the invention discloses a viewing user tracking method and apparatus, an electronic device, and a storage medium. The method is applied to a naked-eye 3D display device and includes: determining the number of target face objects from a target image, where the target image is acquired by at least one image collector within the viewing range of the naked-eye 3D display device; if the number of target face objects is at least two, determining a target user from the candidate users corresponding to the target face objects; and presenting a naked-eye 3D display picture to the target user according to the target relative position of the target user with respect to the screen. With the technical solution provided by the embodiment of the invention, the eye-tracking object can be selected flexibly in a scene where multiple users watch a 3D display picture, improving the 3D experience of the multiple users.

Description

Watching user tracking method and device, electronic equipment and storage medium
Technical Field
Embodiments of the invention relate to the technical field of naked-eye 3D display, and in particular to a viewing user tracking method and apparatus, an electronic device, and a storage medium.
Background
In the related art, a naked-eye 3D viewing user tracking method tracks only one user so that this user can view ideal 3D display content. When several people are in front of the screen, either the display switches to multi-view display, whose 3D effect is not ideal, or the first tracked person remains tracked, so that other people who arrive later in front of the screen cannot see ideal 3D display content; eye tracking only switches to another viewer after the first tracked user leaves the tracking picture or turns the face to an angle at which it is no longer detected.
With the above eye-tracking logic, other people must be kept out of the eye-tracking area for one person to view the desired 3D content. In scenes where the eye-tracking target needs to be switched temporarily among several people within the eye-tracking area, the requirement for flexible and free use cannot be met.
Disclosure of Invention
Embodiments of the present invention provide a viewing user tracking method and apparatus, an electronic device, and a storage medium, which allow the eye-tracking object to be selected flexibly in a scene where multiple users watch a 3D display picture and can improve the 3D experience of the multiple users.
In a first aspect, an embodiment of the present invention provides a viewing user tracking method, which is applied to a naked-eye 3D display device, and the method includes:
determining the number of target face objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects;
and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
In a second aspect, an embodiment of the present invention further provides an apparatus for tracking a viewing user, configured on a naked-eye 3D display device, where the apparatus includes:
a facial object number determination module for determining the number of target type facial objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
the target user determining module is used for determining a target user from candidate users corresponding to each target class face object if the number of the target class face objects is determined to be at least two;
and the display module is used for presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a viewing user tracking method as in any one of the embodiments of the invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the viewing user tracking method according to any one of the embodiments of the present invention.
The technical solution provided by the embodiment of the invention is applied to a naked-eye 3D display device: the number of target face objects is determined from a target image, where the target image is acquired by at least one image collector within the viewing range of the naked-eye 3D display device; if the number of target face objects is at least two, a target user is determined from the candidate users corresponding to the target face objects; and a naked-eye 3D display picture is presented to the target user according to the target relative position of the target user with respect to the screen. By executing this technical solution, the eye-tracking object can be selected flexibly in a scene where multiple users watch a 3D display picture, improving the 3D experience of the multiple users.
Drawings
FIG. 1 is a flow chart of a viewing user tracking method provided by an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating that a user selects a face area labeled 1 in a target image as an eye tracking target through a mouse according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a candidate user with reference number 1 as a target user for performing 3D display according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating that a user selects a face area labeled 3 in a target image as a human eye tracking target through a mouse according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a candidate user labeled as 3 as a target user for performing 3D display according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating that a user does not select any face region according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a multi-view display provided by an embodiment of the invention;
FIG. 8 is a flow chart of another viewing user tracking method provided by embodiments of the present invention;
FIG. 9 is a schematic diagram of a tracking device for a viewing user according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a viewing user tracking method according to an embodiment of the present invention, which may be performed by a viewing user tracking apparatus, which may be implemented by software and/or hardware, and which may be configured in an electronic device for viewing user tracking, such as a naked-eye 3D display device. The method is applied to a scene in which the human eye tracking object is switched to carry out naked eye 3D display. As shown in fig. 1, the technical solution provided by the embodiment of the present invention specifically includes:
s110, determining the number of the target face objects from the target image.
The target image is acquired by at least one image collector within the viewing range of the naked-eye 3D display device.
The naked-eye 3D display device is a display device whose 3D effect can be viewed directly with the naked eye, without wearing 3D glasses; it may be a naked-eye 3D display. The target image may be an eye-tracking monitoring picture, a face-tracking monitoring picture, or a head-tracking monitoring picture. The target image may be obtained by stitching and de-duplicating pictures of corresponding viewing angles captured simultaneously by at least one image collector within the viewing range of the naked-eye 3D display device. The image collector may be a user position tracking sensor built into or attached to the naked-eye 3D display device, and may be any sensor, such as a single camera or several cameras, that can acquire the user's position relative to the screen and the user's face information. Based on the imaging principle of a naked-eye 3D display device, the left-eye and right-eye images can generally be presented with a 3D display effect only at set positions in front of the screen; the set of these positions is the viewing range, that is, the range near the naked-eye 3D display device within which a viewer can perceive the naked-eye 3D display effect. The range between the dotted lines in fig. 3, fig. 5, and fig. 7 is the viewing range of the naked-eye 3D display device; it may, for example, cover plus or minus 45 degrees, or plus or minus 60 degrees, around the normal at the screen center, with the viewer 50-100 centimeters from the naked-eye 3D display device, and the viewing range may be set according to actual needs. Target face objects in the target image are not repeated, i.e. there is only one target face object for the same user in the target image. A target face object may be a human face, or it may be the user's two eyes. In this scheme, the number of target face objects is determined from the target image, and each detected target face object region is framed with a rectangular box so that the user can accurately click the desired target.
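As an illustrative, non-authoritative sketch of this step, the snippet below counts face regions in a target image and frames each one with a rectangle for clicking. The Haar cascade detector and all function names are assumptions for illustration only; the embodiment does not prescribe a particular detector.

```python
import cv2

def count_face_objects(target_image):
    """Return detected face rectangles and their count for a BGR target image."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Frame each detected target face object so the user can click the desired target.
    for (x, y, w, h) in faces:
        cv2.rectangle(target_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return faces, len(faces)
```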
S120: and if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects.
For example, as shown in fig. 2, taking the target face object as a human face, the scheme determines that the number of faces in the target image is 3. If it is detected that the user issues a face selection instruction, for example by clicking the face region of candidate user 1 in the target image with a mouse, the target user is determined from the candidate users corresponding to the faces in the viewing range according to the face region selected by the user. For example, the scheme may determine that the face of candidate user 1 is located in the middle of the target image; since faces correspond one-to-one to candidate users, the candidate user in the middle of the viewing range of the naked-eye 3D display device can accordingly be determined as the target user, as shown in fig. 3. A candidate user may be a user who watches the naked-eye 3D display picture within the viewing range of the naked-eye 3D display device.
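A minimal sketch of how such a face selection instruction could be resolved, assuming the face rectangles produced by the detector above and click coordinates supplied by the user interface; the function name and return convention are illustrative assumptions, not part of the embodiment.

```python
def select_target_face(faces, click_x, click_y):
    """Return the index of the face rectangle containing the click, or None."""
    for idx, (x, y, w, h) in enumerate(faces):
        if x <= click_x <= x + w and y <= click_y <= y + h:
            return idx  # the index identifies the corresponding candidate user
    return None
```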
S130: and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
The target relative position can be determined from the target image and can be set according to actual needs. For example, the target relative position may be the coordinate information of the target user's head, both eyes, or a single eye in a screen reference frame with the screen center as the coordinate origin, or it may be the angle of the target user relative to the screen-center normal. According to this scheme, an ideal naked-eye 3D display picture can be presented to the target user according to the target relative position of the target user with respect to the screen, for example by moving the relative position between the 3D light-splitting device and the display screen in the naked-eye 3D display device, or by moving the content displayed on the display screen, so that the target user is provided with the desired 3D display content.
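As one hedged example of a target relative position, the sketch below estimates the target user's angle off the screen-center normal from the face position in the image, assuming a single camera mounted at the screen center with a known horizontal field of view; the embodiment leaves the exact position model open, so the formula and parameter names are assumptions.

```python
import math

def relative_angle_deg(face_center_x, image_width, horizontal_fov_deg=90.0):
    """Map a face-centre x pixel coordinate to an angle off the screen-centre normal."""
    half_fov = math.radians(horizontal_fov_deg / 2.0)
    # Normalised horizontal offset from the image centre, in [-1, 1].
    offset = (face_center_x - image_width / 2.0) / (image_width / 2.0)
    # Simple pinhole model: pixel offset is proportional to tan(angle).
    return math.degrees(math.atan(offset * math.tan(half_fov)))
```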
It should be noted that, as shown in fig. 4, when it is necessary to switch to another candidate user and provide that user with ideal 3D display content, the user clicks the other candidate user labeled 3 in the target image with the mouse to switch the eye-tracking target. According to this scheme, if it is detected that the face region of the candidate user labeled 3 in the target image has been selected, the candidate user labeled 3 is determined as the target user, the eye-tracking target is switched to that candidate user, and ideal 3D display content is presented to that candidate user, as shown in fig. 5.
As shown in fig. 6, if a target face object selection instruction has not been received, or a cancel instruction to cancel the eye tracking targets of all candidate users is received, the present scheme may switch the display content to 2D display content or multi-view display content, as shown in fig. 7.
The technical solution provided by the embodiment of the invention is applied to a naked-eye 3D display device: the number of target face objects is determined from a target image, where the target image is acquired by at least one image collector within the viewing range of the naked-eye 3D display device; if the number of target face objects is at least two, a target user is determined from the candidate users corresponding to the target face objects; and a naked-eye 3D display picture is presented to the target user according to the target relative position of the target user with respect to the screen. By executing this technical solution, the eye-tracking object can be selected flexibly in a scene where multiple users watch a 3D display picture, improving the 3D experience of the multiple users.
Fig. 8 is a flowchart of another viewing user tracking method according to an embodiment of the present invention, which is optimized on the basis of the foregoing embodiment. For points that are the same, reference may be made to the above embodiment; details are not repeated here. As shown in fig. 8, the viewing user tracking method in this embodiment of the invention may include:
s210: and determining at least one candidate image in the naked eye 3D display equipment viewing range according to each image collector.
If there is only one image collector within the viewing range of the naked-eye 3D display device, the candidate images of all candidate users can be captured by that single image collector. If there are several image collectors within the viewing range of the naked-eye 3D display device, at least one candidate image within the viewing range is determined by each image collector; the shooting angles of these candidate images differ, and together the candidate images capture all candidate users.
And S220, splicing and removing the duplicate of each candidate image according to a preset splicing rule to obtain a target image.
Because each image collector has a fixed shooting range, candidate images captured by adjacent image collectors may overlap; that is, a candidate user may appear in candidate images acquired by adjacent or even non-adjacent image collectors. The preset stitching rule may be an image fusion algorithm: image registration, image projection, and image fusion are performed according to the feature point information in each candidate image, and the candidate images are stitched and de-duplicated according to the preset stitching rule to obtain a target image that contains the features of all candidate users, with each candidate user appearing only once.
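A minimal sketch of such stitching, assuming OpenCV's high-level Stitcher as a stand-in for the unspecified preset stitching rule; the Stitcher performs feature-based registration, projection, and blending internally, while the function name and error handling here are illustrative.

```python
import cv2

def build_target_image(candidate_images):
    """Stitch a list of BGR candidate images into a single de-duplicated target image."""
    stitcher = cv2.Stitcher_create()
    status, target_image = stitcher.stitch(candidate_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return target_image
```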
S230, determining the number of target face objects from the target image.
When there is only one candidate image, the number of target face objects determined in the target image is the number of human faces detected in that candidate image. When the target image is composed of candidate images of multiple viewing angles acquired by multiple image collectors, determining the number of target face objects in the target image means identifying the faces detected in each candidate image, treating people with the same identity across multiple candidate images as the same person, and finally determining the number of target face objects.
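A hedged sketch of counting unique target face objects across several candidate images: `embed_face` stands for any face-embedding model and is an assumption rather than part of the embodiment; faces whose embeddings are closer than a threshold are treated as the same viewer, so each candidate user is counted only once.

```python
import numpy as np

def count_unique_faces(face_crops, embed_face, threshold=0.6):
    """Greedily cluster face crops by embedding distance and count the clusters."""
    unique_embeddings = []
    for crop in face_crops:
        emb = np.asarray(embed_face(crop), dtype=float)
        if all(np.linalg.norm(emb - known) >= threshold for known in unique_embeddings):
            unique_embeddings.append(emb)  # a face not yet seen in any candidate image
    return len(unique_embeddings)
```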
In this embodiment, optionally, after determining the number of target face objects from the target image, the method further includes: if it is determined that there is only one target face object in the target image, determining the candidate user corresponding to that target face object as the target user; and presenting a naked-eye 3D display picture to the target user according to the target relative position of the target user with respect to the screen.
If the scheme determines that there is only one target face object in the target image, this indicates that only one candidate user is present within the viewing range of the naked-eye 3D display device and that this candidate user needs to view the 3D display picture, so the candidate user corresponding to the target face object is determined as the target user. The target relative position of the candidate user with respect to the screen is obtained from the target image, and an ideal naked-eye 3D display picture is presented to the target user according to this target relative position.
Therefore, if there is only one target face object in the target image, the candidate user corresponding to that target face object is determined as the target user, and a naked-eye 3D display picture is presented to the target user according to the target relative position of the target user with respect to the screen. In this way the eye-tracking object can be determined flexibly, and the user's 3D experience can be improved.
In this embodiment, optionally, after determining the number of target face objects from the target image, the method further includes: if the target image is determined to have no target face object, controlling the naked eye 3D display equipment to present a 2D display picture; or controlling the naked eye 3D display equipment to present a multi-view display picture.
If it is determined that no target face object exists in the target image, no candidate user is present within the viewing range of the naked-eye 3D display device, and there is no need to display a 3D picture for a specific candidate user; a 2D display picture or a multi-view display picture may be presented instead. The 2D display picture is an ordinary picture, and the multi-view display picture can present different display content at several different viewpoints within the viewing range of the naked-eye 3D display device.
Therefore, if the target face object does not exist in the target image, controlling the naked eye 3D display equipment to present a 2D display picture; or controlling the naked eye 3D display device to present the multi-view display picture. The method and the device can ensure that the naked eye 3D display equipment displays the corresponding picture, improve the processing efficiency and save the computing resource.
S240, if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects.
In this embodiment, optionally, the determining a target user from candidate users corresponding to each target face object includes: determining a target face from the target image according to a target face selection instruction; and determining a target user according to the relative position of the target face in the target image and the relative position between the candidate users.
As shown in fig. 4, the target face selection instruction may be the user clicking a face region in the target image with a mouse, selecting a face region with a shortcut key on a keyboard, or any other operation capable of selecting a face region in the target image. According to this scheme, the target face can be determined from the target image according to the target face selection instruction, and the target user is determined according to the relative position of the target face in the target image and the relative positions among the candidate users. For example, in fig. 4 the user selects the face region labeled 3, which is on the far right of the target image. The image collector and the user's viewing direction are in a mirror-image relationship, so the arrangement of faces in the target image is exactly reversed with respect to the arrangement of candidate users within the viewing range; therefore, the candidate user labeled 3 on the far left of the viewing range can be determined as the target user, as shown in fig. 5.
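The mirror-image mapping described above can be sketched as follows; the ordering convention, indices, and function name are illustrative assumptions under the simplifying premise that viewers stand roughly side by side in front of the screen.

```python
def map_face_to_candidate(faces, selected_idx):
    """Return the viewing-range position (0 = leftmost viewer) of the selected face."""
    # Rank the face rectangles left-to-right by their x coordinate in the target image.
    order = sorted(range(len(faces)), key=lambda i: faces[i][0])
    image_rank = order.index(selected_idx)
    # Mirror relationship: the rightmost face in the image is the leftmost viewer.
    return len(faces) - 1 - image_rank
```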
Thus, the target face is determined from the target image according to the target face selection instruction; and determining the target user according to the relative position of the target face in the target image and the relative position between the candidate users. The target user can be determined according to the selection of the user on the face in the target image, the target user can be accurately determined, and reliable data support is provided for presenting a 3D display picture to the target user.
In a possible implementation manner, optionally, determining a target user from the candidate users corresponding to each target face object includes: determining target eyes from the target image according to a target eye selection instruction; and determining a target user according to the relative position of the target eyes in the target image and the relative position between the candidate users.
When the target image is an eye-tracking monitoring picture, the number of pairs of eyes in the target image can be determined, and each binocular region is framed with a rectangular box. The target binocular selection instruction may be the user clicking a binocular region in the target image with a mouse, selecting a binocular region with a shortcut key on a keyboard, or any other operation capable of selecting a binocular region in the target image. According to this scheme, the target pair of eyes can be determined from the target image according to the target binocular selection instruction, and the target user is determined according to the relative position of the target pair of eyes in the target image and the relative positions among the candidate users. For example, if the user selects the binocular region in the middle of the target image, the candidate user corresponding to the pair of eyes in the middle of the viewing range may be determined as the eye-tracking object, i.e. the target user. The scheme then presents a 3D display picture to the target user according to the determined target relative position of the two eyes with respect to the screen.
Thus, the target pair of eyes is determined from the target image according to the target binocular selection instruction, and the target user is determined according to the relative position of the target eyes in the target image and the relative positions among the candidate users. The target user can be determined accurately according to the user's selection of a pair of eyes in the target image, which provides reliable data support for presenting a 3D display picture to the target user.
In another optional embodiment, optionally, determining a target user from the candidate users corresponding to each target face object includes: and if the similarity between the gesture information of the candidate user and the preset tracking gesture information is determined to be greater than or equal to a preset threshold value according to the target image, determining that the candidate user is the target user.
The gesture information may be a user gesture, such as an "OK" gesture or a pointing gesture, or another posture such as a limb movement; it may be set according to actual needs. The target image can contain the posture information of the candidate users within the viewing range of the naked-eye 3D display device. The preset tracking posture information is a specific, pre-stored posture of a candidate user, such as an "OK" gesture or a scissor-hand gesture. The preset threshold may be 95% or 99%, and can be set according to actual needs. According to this scheme, keypoint information can be obtained with a human keypoint detection algorithm, the similarity between the candidate user's posture information and the preset tracking posture information is determined based on the keypoint information, and posture recognition is then performed according to how this similarity compares with the preset threshold.
For example, a specific gesture is used as the condition for a candidate user to be selected as the target user: when a candidate user makes an "OK" gesture, the similarity between the posture information in the target image and the pre-stored preset tracking posture information is determined, and if the similarity is greater than or equal to the preset threshold, the target user is switched to the candidate user making the gesture. Alternatively, when the current target user points to another candidate user with a pointing gesture, the target user is switched to the candidate user being pointed at.
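A minimal sketch of the similarity check, assuming keypoint vectors produced by any human keypoint detector; cosine similarity and the 0.95 threshold are illustrative choices rather than the embodiment's prescribed method.

```python
import numpy as np

def matches_tracking_gesture(keypoints, template, threshold=0.95):
    """Return True when the keypoint vectors are similar enough to the stored gesture."""
    a = np.asarray(keypoints, dtype=float).ravel()
    b = np.asarray(template, dtype=float).ravel()
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```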
Therefore, if it is determined from the target image that the similarity between a candidate user's posture information and the preset tracking posture information is greater than or equal to the preset threshold, that candidate user is determined as the target user. This avoids the need for a user to manually select a target face object in the target image to switch the eye-tracking object, allows viewers to switch the eye-tracking object conveniently and flexibly by themselves, and can improve the users' 3D experience.
And S250, presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
The technical solution provided by the embodiment of the invention is applied to a naked-eye 3D display device: at least one candidate image within the viewing range is determined by each image collector; the candidate images are stitched and de-duplicated according to a preset stitching rule to obtain a target image; the number of target face objects is determined from the target image, where the target image is acquired by at least one image collector within the viewing range of the naked-eye 3D display device; if the number of target face objects is at least two, a target user is determined from the candidate users corresponding to the target face objects; and a naked-eye 3D display picture is presented to the target user according to the target relative position of the target user with respect to the screen. By executing this technical solution, the eye-tracking object can be selected flexibly in a scene where multiple users watch a 3D display picture, improving the 3D experience of the multiple users.
Fig. 9 is a schematic structural diagram of a viewing user tracking apparatus configured on a naked-eye 3D display device according to an embodiment of the present invention, where the apparatus may be configured in an electronic device for viewing user tracking. As shown in fig. 9, the apparatus includes:
a number of face objects determination module 310 for determining a number of target class face objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
a target user determining module 320, configured to determine a target user from candidate users corresponding to each target face object if it is determined that the number of the target face objects is at least two;
and the display module 330 is configured to present a naked eye 3D display picture to the target user according to the target relative position of the target user with respect to the screen.
Optionally, the apparatus further includes a target image determining module, configured to determine at least one candidate image in the viewing range according to each of the image collectors; and splicing and removing the duplication of each candidate image according to a preset splicing rule to obtain the target image.
Optionally, the target user determining module 320 includes a target face determining unit, configured to determine a target face from the target image according to a target face selection instruction; and the first target user determining unit is used for determining a target user according to the relative position of the target face in the target image and the relative position between the candidate users.
Optionally, the target user determining module 320 includes a target binocular determining unit, configured to determine target binocular from the target image according to a target binocular selecting instruction; and the second target user determining unit is used for determining a target user according to the relative position of the target eyes in the target image and the relative position between the candidate users.
Optionally, the target user determining module 320 is specifically configured to determine that the candidate user is the target user if it is determined that the similarity between the posture information of the candidate user and the preset tracking posture information is greater than or equal to a preset threshold according to the target image.
Optionally, the apparatus further includes a unique face object module, configured to, after the number of target face objects is determined from the target image, determine the candidate user corresponding to the target face object as the target user if there is only one target face object in the target image, and present a naked-eye 3D display picture to the target user according to the target relative position of the target user with respect to the screen.
Optionally, the apparatus further includes a face object absence module, configured to, after the number of target face objects is determined from the target image, control the naked-eye 3D display device to present a 2D display picture, or control the naked-eye 3D display device to present a multi-view display picture, if it is determined that no target face object exists in the target image.
The device provided by the embodiment can execute the method for tracking the watching user executed by the naked eye 3D display equipment provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 10, the electronic device includes:
one or more processors 410, one processor 410 being exemplified in FIG. 10;
a memory 420;
the apparatus may further include: an input device 430 and an output device 440.
The processor 410, the memory 420, the input device 430 and the output device 440 of the apparatus may be connected by a bus or other means, and fig. 10 illustrates the connection by a bus as an example.
The memory 420, as a non-transitory computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to a viewing user tracking method in embodiments of the present invention. The processor 410 executes the various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 420, that is, it implements the viewing user tracking method of the above method embodiments:
determining the number of target face objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects;
and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 420 may optionally include memory located remotely from processor 410, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus. The output device 440 may include a display device such as a display screen.
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a viewing user tracking method according to an embodiment of the present invention, that is:
determining the number of target face objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects;
and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A tracking method for a watching user is applied to naked eye 3D display equipment and is characterized by comprising the following steps:
determining the number of target face objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
if the number of the target face objects is determined to be at least two, determining a target user from candidate users corresponding to the target face objects;
and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
2. The method of claim 1, wherein the determining of the target image comprises:
determining at least one candidate image in the viewing range according to each image collector;
and splicing and removing the duplication of each candidate image according to a preset splicing rule to obtain the target image.
3. The method of claim 1, wherein determining a target user from the candidate users corresponding to each of the target face-like objects comprises:
determining a target face from the target image according to a target face selection instruction;
and determining a target user according to the relative position of the target face in the target image and the relative position between the candidate users.
4. The method of claim 1, wherein determining a target user from the candidate users corresponding to each of the target face-like objects comprises:
determining target eyes from the target image according to a target eye selection instruction;
and determining a target user according to the relative position of the target eyes in the target image and the relative position between the candidate users.
5. The method of claim 1, wherein determining a target user from the candidate users corresponding to each of the target face-like objects comprises:
and if the similarity between the gesture information of the candidate user and the preset tracking gesture information is determined to be greater than or equal to a preset threshold value according to the target image, determining that the candidate user is the target user.
6. The method of claim 1, after determining the number of target class face objects from the target image, further comprising:
if only one target face object in the target image is determined, determining a candidate user corresponding to the target face image as a target user;
and presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
7. The method of claim 1, after determining the number of target class face objects from the target image, further comprising:
if the target image is determined to have no target face object, controlling the naked eye 3D display equipment to present a 2D display picture; or,
controlling the naked eye 3D display equipment to present a multi-view display picture.
8. A viewing user tracking device configured for use with a naked-eye 3D display device, comprising:
a facial object number determination module for determining the number of target type facial objects from the target image; the target image is acquired from the viewing range of the naked eye 3D display equipment by at least one image acquisition device;
the target user determining module is used for determining a target user from candidate users corresponding to each target class face object if the number of the target class face objects is determined to be at least two;
and the display module is used for presenting a naked eye 3D display picture to the target user according to the target relative position of the target user relative to the screen.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a viewing user tracking method according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the viewing user tracking method according to any one of claims 1 to 7.
CN202210031712.4A 2022-01-12 2022-01-12 Watching user tracking method and device, electronic equipment and storage medium Pending CN114173109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210031712.4A CN114173109A (en) 2022-01-12 2022-01-12 Watching user tracking method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210031712.4A CN114173109A (en) 2022-01-12 2022-01-12 Watching user tracking method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114173109A true CN114173109A (en) 2022-03-11

Family

ID=80489223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210031712.4A Pending CN114173109A (en) 2022-01-12 2022-01-12 Watching user tracking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114173109A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732210A (en) * 2015-03-17 2015-06-24 深圳超多维光电子有限公司 Target human face tracking method and electronic equipment
CN107833263A (en) * 2017-11-01 2018-03-23 宁波视睿迪光电有限公司 Feature tracking method and device
US20180373858A1 (en) * 2017-06-26 2018-12-27 International Business Machines Corporation System and method for continuous authentication using augmented reality and three dimensional object recognition
CN110740309A (en) * 2019-09-27 2020-01-31 北京字节跳动网络技术有限公司 image display method, device, electronic equipment and storage medium
CN113434046A (en) * 2021-06-28 2021-09-24 纵深视觉科技(南京)有限责任公司 Three-dimensional interaction system, method, computer device and readable storage medium
CN113438464A (en) * 2021-06-22 2021-09-24 纵深视觉科技(南京)有限责任公司 Switching control method, medium and system for naked eye 3D display mode

Similar Documents

Publication Publication Date Title
EP3195595B1 (en) Technologies for adjusting a perspective of a captured image for display
WO2020015468A1 (en) Image transmission method and apparatus, terminal device, and storage medium
US9696798B2 (en) Eye gaze direction indicator
CN105787884A (en) Image processing method and electronic device
CN111693147A (en) Method and device for temperature compensation, electronic equipment and computer readable storage medium
CN112584076B (en) Video frame interpolation method and device and electronic equipment
US9531995B1 (en) User face capture in projection-based systems
CN111695516B (en) Thermodynamic diagram generation method, device and equipment
EP3062506B1 (en) Image switching method and apparatus
CN114706484A (en) Sight line coordinate determination method and device, computer readable medium and electronic equipment
CN113724391A (en) Three-dimensional model construction method and device, electronic equipment and computer readable medium
CN115379125A (en) Interactive information sending method, device, server and medium
CN111914630A (en) Method, apparatus, device and storage medium for generating training data for face recognition
CN113055593A (en) Image processing method and device
CN109842791B (en) Image processing method and device
CN112651270A (en) Gaze information determination method and apparatus, terminal device and display object
CN114173109A (en) Watching user tracking method and device, electronic equipment and storage medium
Narducci et al. Enabling consistent hand-based interaction in mixed reality by occlusions handling
JP7293362B2 (en) Imaging method, device, electronic equipment and storage medium
CN114600162A (en) Scene lock mode for capturing camera images
CN111754543A (en) Image processing method, device and system
CN114356088B (en) Viewer tracking method and device, electronic equipment and storage medium
CN114115527B (en) Augmented reality AR information display method, device, system and storage medium
CN114449250A (en) Method and device for determining viewing position of user relative to naked eye 3D display equipment
CN112533071B (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination