WO2019062056A1 - Intelligent projection method and system, and intelligent terminal - Google Patents
Intelligent projection method and system, and intelligent terminal
- Publication number
- WO2019062056A1 (PCT/CN2018/081147)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- angle
- head
- observer
- camera
- projected object
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Definitions
- the present application relates to the field of intelligent projection, and in particular to an intelligent projection method, system and intelligent terminal.
- 3D stereo display technology is one of today's most active technologies: it separates the left-eye and right-eye signals to realize stereoscopic image display on a display platform.
- Stereoscopic display is one of the ways to realize immersive interaction in virtual reality (VR).
- 3D stereoscopic display can present the depth, layering and position of a projected object, so the observer can grasp the actual distribution of the projected object more intuitively and thus understand the projected object or display content more comprehensively. However, the observer is not stationary: when the observer changes position, the virtual projected object must deflect accordingly, so that the observer can clearly view the stereoscopic image content from other angles and viewing is more comfortable.
- Chinese patent CN104155840A discloses a 360° full-parallax three-dimensional display device based on a high-speed projector, which sends images to the high-speed projector according to the 3D scene information to be displayed and the position information of each observer, so that observers at different positions can all see accurate image information. However, to guarantee that an observer always sees the correct image, the image must be drawn in real time according to the observer's position; the more complex the 3D scene, the larger the required amount of computation.
- the image seen by each eye of each observer needs to reach a refresh rate of 60 Hz, i.e. the frame rate of the image provided to each observer needs to be 120 Hz; with N observers, the frame rate to be output is N*120 Hz.
- when the location tracking device tracks the observers' positions, it also requires a large amount of computation; therefore a high-performance computer or graphics workstation is required to meet these requirements.
- given these prior-art drawbacks of heavy computation and high demands on computer performance, the present application provides an intelligent projection method and system with which, simply and conveniently from the coordinates of the face or eye position, the projected object can be viewed from all directions with a clear image.
- with the projection method of the present application, when the observer is at different positions the projected object deflects toward the direction in which the observer faces the projection screen, so the observer is not confined to one location, which improves the user experience.
- the technical problem to be solved by the embodiments of the present application is to provide an intelligent projection method, system and intelligent terminal that track the user's face or eyes and extract head feature points; when the face or eyes rotate or move, the virtual projected object deflects, and the viewing angle of the projected object relative to the observer is obtained by establishing a simple and clear mathematical model.
- a technical solution adopted by the embodiments of the present application is an intelligent projection method applied to an intelligent terminal, in which the virtual projected object deflects when the observer's head feature points move or rotate.
- the head key positioning points being obtained from an observer image captured by the camera;
- the angle a2 between the observer's head and the projected object is determined from the angle a1 between the observer's head and the camera and from the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- obtaining the key positioning points of the observer's head includes:
- the method further includes:
- determining the observer's position in the camera plane and the projection plane includes:
- the angle a1 includes an angle x_angle_c between the head and the camera in the X-axis direction, and an angle y_angle_c between the head and the camera in the Y-axis direction;
- x_angle_c is the angle between the head and the camera in the X-axis direction
- y_angle_c is the angle between the head and the camera in the Y-axis direction
- C point indicates the position of the head
- O point indicates the position of the camera
- d is the specific distance AO between the head and the camera
- dpixel is the actual distance between each pixel
- the unit is cm/pixel
- (x_c, y_c) are the coordinates of point C in the image
- (x_a, y_a) are the coordinates of point A in the image (a code sketch of this angle computation follows below)
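- as a minimal sketch of this head-camera angle computation (function and variable names are illustrative, not from the patent; angles are returned in degrees):

```python
import math

def head_camera_angles(x_c, y_c, x_a, y_a, d, dpixel):
    """Compute the head-camera angles a1 = (x_angle_c, y_angle_c).

    (x_c, y_c): pixel coordinates of the head point C in the camera image
    (x_a, y_a): pixel coordinates of the reference point A on the camera axis
    d:          head-camera distance AO, in cm
    dpixel:     physical distance represented by one pixel, in cm/pixel
    """
    dx = (x_a - x_c) * dpixel                    # physical offset in X, formula (1)
    dy = (y_a - y_c) * dpixel                    # physical offset in Y, formula (2)
    x_angle_c = math.degrees(math.atan(dx / d))  # formula (3)
    y_angle_c = math.degrees(math.atan(dy / d))  # formula (4)
    return x_angle_c, y_angle_c
```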
- the angle a2 between the observer's head and the projected object includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction; the formulas for the angle between the head and the projected object are as follows:
- x_angle_o = ratio * x_angle_c, y_angle_o = ratio * y_angle_c
- h is the height of the image
- y is the projection distance of the head in the Y direction of the image
- x_angle_c is the angle between the camera and the head in the X direction
- y_angle_c is the angle between the camera and the head in the Y direction
- x_angle_o is the angle between the head and the projected object in the X direction
- y_angle_o is the angle between the head and the projected object in the Y direction
- k_0 and k_1 are fixed coefficients (a hedged sketch of this fit follows below).
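- the published expression for ratio is reproduced only as a figure, so the exponential form below is an assumed placeholder built from the stated quantities (k_0, k_1, y, h); all names are illustrative:

```python
import math

def head_object_angles(x_angle_c, y_angle_c, y, h, k0, k1):
    """Fit the head-object angles a2 from the head-camera angles a1.

    The patent states only that the relationship can be expressed with exp();
    ratio = k0 * exp(k1 * y / h) is an assumed concrete form, not the
    published formula.
    """
    ratio = k0 * math.exp(k1 * y / h)  # assumed placeholder for the fitted ratio
    x_angle_o = ratio * x_angle_c      # x_angle_o = ratio * x_angle_c
    y_angle_o = ratio * y_angle_c      # y_angle_o = ratio * y_angle_c
    return x_angle_o, y_angle_o
```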
- the method for determining an angle a2 between an observer's head and a projected object further includes:
- a geometric coordinate model is established with the observer's height fixed, to determine the angle between the head and the projected object.
- the included angle a2 includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction; the formula for determining the angle between the observer's head and the projected object is as follows:
- y_angle is the angle at which the camera axis is tilted in the Y direction
- y_angle_c is the angle between the head and the camera in the direction of the axis Y
- y_angle_o is the angle formed by the head and the projected object in the direction of the axis Y
- H is the height of the observer
- L2 is the distance between the head and the projected object
- L1 is the distance between the camera and the projected object
- h1 is the height of the projected object
- h2 is the height of the camera
- x_angle_c is the angle between the head and the camera in the direction of the axis X
- x_angle_o is the angle formed by the head and the projected object in the X-axis direction (a geometric sketch follows below).
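- the published formulas for this fixed-height model are reproduced only as figures, so the following is a speculative geometric reconstruction from the variable definitions alone; it assumes the camera sits between the head and the projected object on the same axis, and all names are illustrative:

```python
import math

def y_angles_fixed_height(H, h1, h2, L1, L2, y_angle):
    """Sketch of the Y-direction geometry with the observer's height H fixed.

    H:  observer height        h1: height of the projected object
    h2: camera height          L1: camera-object distance
    L2: head-object distance   y_angle: tilt of the camera axis in Y (degrees)
    """
    # head-camera angle in Y: the camera is assumed to be L2 - L1 in front of
    # the head, mounted at height h2, with its optical axis tilted by y_angle
    y_angle_c = math.degrees(math.atan((H - h2) / (L2 - L1))) - y_angle
    # head-object angle in Y: the object of height h1 is L2 from the head
    y_angle_o = math.degrees(math.atan((H - h1) / L2))
    return y_angle_c, y_angle_o
```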
- an intelligent projection system, applied to an intelligent terminal, in which the virtual projected object deflects when the observer's head feature points move or rotate.
- the system includes:
- a key positioning point acquiring unit configured to acquire key positioning points of the observer's head, where the head key positioning points are obtained from an observer image captured by the camera;
- a plane determining unit configured to determine, according to the head key positioning point, a position of the observer in a camera plane and a projection plane;
- An angle determining unit configured to determine an angle a1 between the observer's head and the camera according to the position of the observer in the plane of the camera;
- a viewing angle determining unit configured to determine the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- the key positioning point obtaining unit is specifically configured to:
- an intelligent terminal including:
- at least one processor; and a memory communicatively connected to the at least one processor; wherein
- the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method of any of the above.
- another technical solution adopted by the embodiments of the present application provides a non-transitory computer-readable storage medium storing computer-executable instructions; when the computer-executable instructions are executed by an intelligent terminal, the intelligent terminal is caused to perform any of the methods above.
- an advantageous effect of the embodiments of the present application is: acquiring key positioning points of the observer's head, the points being obtained from an observer image captured by the camera; determining, from the head key positioning points, the observer's position in the camera plane and the projection plane; determining the angle a1 between the observer's head and the camera from the observer's position in the camera plane; and determining the angle a2 between the observer's head and the projected object from the angle a1 and from the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- the method is computationally simple and requires no high-performance computer equipment, making it convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, which relieves visual fatigue to some extent, helps the observer view the projected object at the optimal viewing angle every time, and also allows multiple stereoscopic projected objects to be viewed clearly and accurately.
- FIG. 1 is a flowchart of an intelligent projection method according to an embodiment of the present application;
- FIG. 2 is a schematic diagram of the positional relationship of an observer in the camera plane and the projection plane according to an embodiment of the present application;
- FIG. 3 is a flowchart of obtaining a key positioning point of an observer's head according to an embodiment of the present application
- FIG. 4 is a flowchart of another key positioning point for acquiring an observer's head according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of a positional relationship between a camera and a human face when the center optical axis of the camera is parallel to the ground according to an embodiment of the present application;
- FIG. 6 is a mathematical geometric model diagram of a face, a camera, and a projected object in the y direction when the height of the human body is fixed according to an embodiment of the present application;
- FIG. 7 is a mathematical geometric model diagram of a face, a camera, and a projected object in the x direction according to an embodiment of the present application;
- FIG. 8 is a schematic diagram of an intelligent projection system according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
- the three-dimensional stereoscopic (3D Stereo) display technology can be divided into a glasses type and a naked eye type.
- the glasses type is taken as an example, and the observer can clearly see the projected object image projected by the projector by wearing the 3D glasses.
- whether the virtual projected object is stationary or in motion, the viewing angle of the projected object relative to the observer (i.e. the projection angle of view) changes as the face turns, without the observer perceiving any flicker.
- the drawings of the embodiments of the present application take face movement as an example. It is worth noting that the present application does not restrict the specific method used to track and recognize the user's face or eyes and to determine the viewing angle of the projected object relative to the observer from the positions of the face, the camera and the projected object; any method that collects user images by image acquisition and deflects the projected object with the face movement according to the pairwise angle relationships among the face, the camera and the projected object may be used.
- FIG. 1 is a flowchart of an intelligent projection method according to an embodiment of the present disclosure, where the method includes:
- S11: acquiring key positioning points of the observer's head, where the head key positioning points are obtained from an observer image captured by the camera;
- facial features, like the body's other biometric characteristics (such as fingerprints and irises), are innate; their uniqueness and resistance to replication provide the necessary premise for identity recognition.
- the key positioning points of the observer's head are selected from head feature points according to the face image, e.g. by visual features, pixel statistical features, face-image variation coefficient features, face-image algebraic features, histogram features, color features, template features and structural features.
- the position and size of the face are first calibrated in the image, and the key positioning points of the observer's head are then selected according to preset rules and algorithms; generally several positioning points are selected for judging whether the face position has changed, improving accuracy and feasibility.
- S12: determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane;
- as FIG. 2 shows more intuitively, the face, the camera and the projected object together form a space; the observer's face is projected onto the camera plane and the projection plane, and the viewing angle of the projected object relative to the observer is calculated from the position and angle relationships of the objects in plane coordinates.
- S13: determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane;
- S14: determining the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- once the angle a1 between the observer's head and the camera and the angle a2 between the observer's head and the projected object are determined, the viewing angle of the projected object relative to the observer is uniquely determined; the virtual projected object deflects as the face moves, and the projected object is deflected to the target position according to the calculated viewing angle, generally by a maximum of about 90°.
- FIG. 3 is a flowchart of obtaining a key positioning point of an observer's head according to an embodiment of the present application.
- obtaining the key positioning points of the observer's head includes:
- S21: capturing a head image within the camera detection area;
- when the user enters the detection area, the signal of the user's entry is sensed; to overcome insufficient lighting, a multi-light-source face recognition technology based on active near-infrared images can be used.
- tracking and capturing of the head image may be implemented in various ways, which the present application does not limit.
- in the embodiments of the present application, the head image is collected by a camera; there may be one or more cameras, with multiple cameras distributed in a fixed space of the virtual projection scene and shooting the fixed space without blind angles. When the user enters the fixed space, the cameras can collect images of the user simultaneously; each camera is connected to the intelligent projection system, and they can work independently or cooperatively. A 360-degree omnidirectional camera can also be used for all-round, blind-spot-free acquisition.
- S22: reading first-frame information of the image;
- the acquired image is preprocessed to facilitate the extraction of facial features; the preprocessing includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening. The captured image is read, and its first-frame information is read to judge whether the acquired image contains the required head image.
- S23: detecting a face or eyes in the image;
- specifically, face or eye regions are selected from the head image containing the required content, and the largest region is filtered out.
- S24: acquiring the head key positioning points according to the face or eyes.
- for the face, the head key positioning points may be the positions of the two mouth corners, the two eyebrow peaks, the two ears, the two cheekbone prominences, the bridge of the nose and so on, or the observer's own distinctive facial features (a detection sketch follows below).
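- as an illustrative sketch of steps S21-S24, using OpenCV's stock Haar cascade as a stand-in detector (the patent does not name a specific detector, and the landmark step is left as a placeholder):

```python
import cv2

def largest_face_region(frame):
    """Detect faces in a captured frame and keep the largest region (S23)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # histogram equalization, as in the preprocessing step
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face region
    # S24 placeholder: run a landmark model on frame[y:y+h, x:x+w] to get the
    # key positioning points (mouth corners, eyebrow peaks, nose bridge, ...)
    return (x, y, w, h)
```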
- for the eyes: if the face does not directly face the camera and/or the projected object surface but the eyes glance sideways or sweep across the image detection area, the virtual projected object can likewise deflect as the gaze changes. The method is therefore not limited to the face; the eyes are also key positioning points, and the viewing angle of the projected object relative to the observer is determined in the same way as for the face; whether or not the person moves position, the method of this application applies.
- the position of the user's eyes can be determined from the focal length of the image acquisition device, the position of the eyes in the image, the framing direction and the coordinates of the image acquisition device; the pupil and glint information of the eyes is then extracted from the image, and the person's visual axis (i.e. the line-of-sight direction) is reconstructed from it, implementing a non-contact free-space gaze tracking method; alternatively, a gaze tracking method based on iris recognition may be used.
- the position where the line of sight falls on the camera plane and the projection plane corresponds to the Face point of the face projection position in the embodiments of the present application (as shown in FIG. 5 and FIG. 6).
- the above process acquires the head key positioning points at the observer's initial position and generates an initial tracking frame for the observer's head. The observer's head image continues to be captured, and the movement or rotation of the head feature points is tracked by repeating the above steps: acquire the head key positioning points at the observer's target position and generate a target tracking frame for the observer's head; compute the offset di of each corresponding key positioning point, or the offset do of the target tracking frame's center relative to the initial tracking frame's center; and judge from di or do whether the head feature points have moved or rotated. If the head feature points have not moved or rotated, the viewing angle of the projected object relative to the observer is kept unchanged; in that case the system performs no subsequent calculation, analysis or control operations, which effectively improves its operating efficiency.
- it should be noted that the tracking of the movement or rotation of the head feature points must be continuous and uninterrupted, ensuring that the virtual projected object deflects whenever the observer's head feature points move or rotate, so that at no moment does the observer perceive the projected object to flicker.
- the observer's initial position and target position are two adjacent positions of the head key positioning points within the detection area, and the acquisition interval between the two positions can be made as short as possible.
- FIG. 4 is a flowchart of another method for acquiring the key positioning points of an observer's head according to an embodiment of the present application. Specifically, it may include image acquisition, image processing and data analysis.
- within the detection area, a face signal is captured to acquire an image; the first-frame information of the image is read; a face is detected in the image, and the largest face region is filtered out through image processing; the initial face key positioning points are acquired; an initial tracking frame is generated, and face tracking mode is entered.
- if the first-frame information is acquired successfully, the process proceeds to the next step; if acquisition fails, the process jumps automatically to the face tracking step. The number of consecutive frames with no detection is counted; if it exceeds 10, tracking is judged to have failed or the target to have disappeared, and the process automatically returns to the step of detecting a face in the image.
- a face is detected in the neighborhood of the initial tracking frame and an image is acquired; whether the image in the target tracking frame overlaps the image in the initial tracking frame is judged. If they overlap, the largest overlapping region is filtered out, 5 face target key points are obtained, and the offset di of each corresponding key positioning point is calculated; if they do not overlap, the number of consecutive undetected frames is counted and, if it is less than 10, the offset do of the target tracking frame's center relative to the initial tracking frame's center is calculated. If di > 5 (with 5 points, i.e. d1>5 & d2>5 & d3>5 & d4>5 & d5>5), or do > 10, the face is judged to have moved.
- it can be understood that the thresholds 5 and 10 in di > 5 and do > 10 are chosen as general rules of thumb; the values can be set according to the actual application and precision requirements.
- if the face or eyes have not moved or rotated, the viewing angle of the projected object relative to the observer is kept unchanged; if the face or eyes move or rotate, the viewing angle of the projected object relative to the observer is calculated according to the intelligent projection method provided by the embodiments of the present application, and the deflection of the projected object is controlled accordingly (see the sketch below).
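- a minimal sketch of this movement test, with the example thresholds 5 and 10 from the FIG. 4 flow (names are illustrative):

```python
def face_moved(d_points, do, point_thresh=5, center_thresh=10):
    """Judge whether the face moved: all five key-point offsets d1..d5 must
    exceed point_thresh, or the tracking-frame center offset do must exceed
    center_thresh. The thresholds are the example values from the text and
    are tunable to the application's precision requirements."""
    if d_points and all(di > point_thresh for di in d_points):
        return True
    return do > center_thresh
```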
- FIG. 5 is a schematic diagram of the spatial positional relationship between the camera and the face when the camera's central optical axis is parallel to the ground.
- on the basis of FIG. 2, the coordinate relationship between the camera and the face at a certain distance is established to calculate the angles between the face and the camera in the X- and Y-axis directions, converting the spatial coordinate position into a plane coordinate model in the X- and Y-axis directions, which here gives the position in the camera plane.
- determining the observer's position in the camera plane and the projection plane includes:
- the angle a1 can be decomposed into an angle x_angle_c between the head and the camera in the X-axis direction, and an angle y_angle_c between the head and the camera in the Y-axis direction;
- with the actual distance between the face and the camera (segment AO) being d, the actual distance represented by each pixel is dpixel, in cm/pixel; suppose the coordinates of point C in the image are (x_c, y_c) and those of point A are (x_a, y_a), where (x_a - x_c) and (y_a - y_c) are pixel distances; then the actual distance between A and B is (a worked example follows the formulas):
- dx = (x_a - x_c) * dpixel (1)
- and the actual distance between A and D is:
- dy = (y_a - y_c) * dpixel (2)
- then, according to the angle model shown in FIG. 5:
- x_angle_c = arctan(dx / d) (3)
- y_angle_c = arctan(dy / d) (4)
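- as a worked example with assumed numbers: if d = 100 cm, dpixel = 0.05 cm/pixel, and point C is detected 40 pixels to one side of A and 20 pixels below it, then dx = 40 * 0.05 = 2 cm and dy = 20 * 0.05 = 1 cm, so x_angle_c = arctan(2/100) ≈ 1.15° and y_angle_c = arctan(1/100) ≈ 0.57°.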
- FIG. 6 is a mathematical geometric model diagram of a human face, a camera, and a projected object in the y direction when the height of the human body is fixed according to an embodiment of the present application.
- on the basis of FIG. 2, this embodiment fits the angle between the projected object and the face: given the angle between the face and the camera, the angle between the face and the projected object, and their positional relationship, as the face's distances to the camera and to the projected object increase, the relationship between the head-object angle and the camera-head angle in the X direction, and likewise in the Y direction, can each be expressed with the exponential function exp().
- h is the height of the image
- y is the projection distance of the head in the Y direction of the image
- x_angle_c is the angle between the camera and the head in the X direction
- y_angle_c is the angle between the camera and the head in the Y direction
- x_angle_o is the angle between the head and the projected object in the X direction
- y_angle_o is the angle between the head and the projected object in the Y direction
- k_0 and k_1 are fixed coefficients.
- the angle a2 can be decomposed into an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction; the formulas for determining the angle between the observer's head and the projected object are as follows:
- x_angle_o = ratio * x_angle_c (6)
- y_angle_o = ratio * y_angle_c (7)
- the height of the human body is fixed on the basis of FIG. 2 to establish a geometric coordinate model for calculating the angles between the face and the projected object surface in the X- and Y-axis directions.
- determining the angle a2 between the observer's head and the projected object further includes:
- establishing geometric coordinates with the observer's height fixed to determine the angle between the head and the projected object,
- where the included angle a2 includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction, and the formula for determining the angle between the observer's head and the projected object is as follows:
- y_angle is the angle at which the camera axis is tilted in the Y direction
- y_angle_c is the angle between the head and the camera in the direction of the axis Y
- y_angle_o is the angle formed by the head and the projected object in the direction of the axis Y
- H is the height of the observer
- L2 is the distance between the head and the projected object
- L1 is the distance between the camera and the projected object
- h1 is the height of the projected object
- h2 is the height of the camera
- x_angle_c is the angle between the head and the camera in the direction of the axis X
- x_angle_o is the angle formed by the head and the projected object in the X-axis direction.
- FIG. 7 is a mathematical geometric model diagram of a face, a camera, and a projected object in the x direction according to an embodiment of the present application.
- x_angle_c represents the angle between the face and the camera in the direction of the axis x
- x_angle_o represents the angle formed by the face and the projected object in the direction of the axis x
- the calculation formula is as follows:
- the decomposition angles of the angle a1 between the face and the camera in the X and Y directions are x_angle_c and y_angle_c; similarly, the decomposition angles of the angle a2 between the face and the projected object surface in the X and Y directions are x_angle_o and y_angle_o.
- the viewing angle of the projected object relative to the observer is calculated from the decomposition angles x_angle_c, y_angle_c, x_angle_o and y_angle_o of a1 and a2 in the X and Y directions, realizing face-controlled deflection of the projected object, so that the observer views the projected object at the optimal viewing angle every time.
- the method for determining the viewing angle of the projected object relative to the observer may also build a three-dimensional face model: from the 3D position distribution of the face key points and the distribution of the key points detected in the 2D image, equations for the rotation matrix and the offset matrix are established to determine the face's position (x, y, z) in world coordinates, and the positional relationship between the virtual camera and the projected object in Unity3D is determined from (x, y, z) (a hedged sketch using OpenCV follows below).
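- the patent names only rotation-matrix and offset-matrix equations; OpenCV's solvePnP is one standard way to solve such equations, so the following is a sketch under that assumption (array names are illustrative):

```python
import cv2
import numpy as np

def head_world_position(model_points_3d, image_points_2d, camera_matrix):
    """Solve the rotation and translation of the face from its 3D key-point
    model and the key points detected in the 2D image, then return (x, y, z).

    model_points_3d: (N, 3) float array of face key points in the model frame
    image_points_2d: (N, 2) float array of the same points detected in the image
    camera_matrix:   3x3 intrinsic matrix of the camera
    """
    dist_coeffs = np.zeros((4, 1))  # assume an undistorted camera
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d, image_points_2d, camera_matrix, dist_coeffs)
    if not ok:
        return None
    return tvec.ravel()  # (x, y, z) for placing the virtual camera in Unity3D
```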
- it can be understood that, with the intelligent projection method provided by the embodiments of the present application, the projection angles of multiple simultaneously displayed projected objects can be calculated accurately, and the observer can clearly see the images of multiple virtual projected objects displayed simultaneously in different directions.
- the method is computationally simple and requires no high-performance computer equipment, making it convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, which relieves visual fatigue to some extent while still allowing multiple stereoscopic projected objects to be viewed clearly and accurately.
- FIG. 8 is a schematic diagram of an intelligent projection system 800 according to an embodiment of the present application.
- the system is applied to the smart terminal, and includes: a key positioning point acquiring unit 810, a plane determining unit 820, an angle determining unit 830, and a viewing angle determining unit 840.
- the key positioning point acquiring unit 810 is configured to acquire the observer's head key positioning points, which are obtained from an observer image captured by a camera;
- the plane determining unit 820 is configured to determine, according to the head key positioning points, the observer's position in the camera plane and the projection plane;
- the angle determining unit 830 is configured to determine the angle a1 between the observer's head and the camera according to the observer's position in the camera plane;
- the viewing angle determining unit 840 is configured to determine the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- the key positioning point acquiring unit 810 is specifically configured to: capture a head image within the camera detection area; read first-frame information of the image; detect a face or eyes in the image; and acquire the head key positioning points according to the face or eyes (see the wiring sketch below).
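- a minimal sketch of how the four units of FIG. 8 could be wired together (all class and method names are illustrative, not from the patent):

```python
class IntelligentProjectionSystem:
    """Unit layout of FIG. 8: 810 key points, 820 planes, 830 angle a1, 840 angle a2."""

    def __init__(self, key_point_unit, plane_unit, angle_unit, view_angle_unit):
        self.key_point_unit = key_point_unit
        self.plane_unit = plane_unit
        self.angle_unit = angle_unit
        self.view_angle_unit = view_angle_unit

    def viewing_angle(self, frame):
        points = self.key_point_unit.acquire(frame)                   # unit 810
        cam_pos, proj_pos = self.plane_unit.locate(points)            # unit 820
        a1 = self.angle_unit.head_camera_angle(cam_pos)               # unit 830
        return self.view_angle_unit.head_object_angle(a1, proj_pos)  # unit 840
```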
- since the device embodiment and the method embodiment are based on the same concept, provided their contents do not conflict, the device embodiment may refer to the method embodiment, and details are not repeated here.
- the 3D display system of the present application has a simple structure: only a camera, 3D glasses and an intelligent terminal device containing the software that controls the system's operation need to be prepared.
- the camera can be an ordinary projection camera, and the system can process one or more collected images simultaneously.
- the projected object is displayed on any displayable area, and the observer does not need to be fixed at one position at all, because the projected object can be deflected as the observer moves, which alleviates the visual fatigue to some extent.
- meanwhile, the observer's eyes are not limited to viewing a single stereoscopic display image: multiple projected-object images in different directions can be seen clearly at the same time.
- using formulas (1)-(14), the system can accurately calculate the viewing angles of multiple simultaneously displayed projected objects, so that each image presents a clear viewing angle according to the face position or eye movement.
- the smart terminal may be an electronic device with a display screen, such as a smart phone, a computer, a personal digital assistant (PDA), a tablet computer, a smart watch, or an e-book.
- the smart terminal supports an open operating system platform, and the operating system can be a UNIX system, a Linux system, a Mac OS X system, a Windows system, an iOS system, an Android system, a WP system, a Chrome OS system, and the like.
- FIG. 9 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
- the smart terminal 900 includes one or more processors 901 and a memory 902.
- one processor 901 is taken as an example in FIG. 9.
- the processor 901 and the memory 902 may be connected by a bus or by other means; FIG. 9 takes a bus connection as an example.
- the memory 902, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the intelligent projection method in the embodiments of the present application (for example, the key positioning point acquiring unit 810, the plane determining unit 820, the angle determining unit 830 and the viewing angle determining unit 840 shown in FIG. 8).
- by running the non-volatile software programs, instructions and modules stored in the memory 902, the processor 901 executes the various functional applications and data processing of the intelligent projection system, i.e. implements the projection method of the above method embodiments and the functions of the modules and units of the above system embodiment.
- the memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the intelligent projection system, and the like.
- memory 902 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
- memory 902 can optionally include memory remotely located relative to processor 901, which can be coupled to processor 901 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the program instructions/modules are stored in the memory 902 and, when executed by the one or more processors 901, perform the intelligent projection method of any of the above method embodiments, e.g. executing method steps S11 to S14 of FIG. 1 described above; the functions of the modules or units described with reference to FIG. 8 can also be implemented.
- an embodiment of the present application further provides a non-transitory computer readable storage medium.
- the non-transitory computer-readable storage medium stores electronic-device-executable instructions for causing an electronic device to perform the intelligent projection method of the above embodiments: acquiring key positioning points of the observer's head, the points being obtained from an observer image captured by the camera; determining, from the head key positioning points, the observer's position in the camera plane and the projection plane; determining the angle a1 between the observer's head and the camera from the observer's position in the camera plane; and determining the angle a2 between the observer's head and the projected object from the angle a1 and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- the method is computationally simple and requires no high-performance computer equipment, making it convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, which relieves visual fatigue to some extent while still allowing multiple stereoscopic projected objects to be viewed clearly and accurately.
- the system or device embodiments described above are merely illustrative: the unit modules described as separate components may or may not be physically separate, and the components shown as module units may or may not be physical units; they may be located in one place or distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the present application discloses an intelligent projection method and system, applied to an intelligent terminal, in which a virtual projected object deflects when the observer's head feature points move or rotate. The method comprises: acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera; determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane; determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and determining the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer. Through simple and convenient calculation, and without investing in high-performance computer equipment, the observer can watch the virtual projected object turn with the projection viewing angle from different positions, improving the user experience.
Description
The present application relates to the field of intelligent projection, and in particular to an intelligent projection method and system, and an intelligent terminal.
Three-dimensional stereo (3D Stereo) display technology is one of today's most active technologies; by separating the left-eye and right-eye signals, stereoscopic image display is realized on a display platform. Stereoscopic display is one of the ways to realize immersive interaction in virtual reality (VR). 3D stereoscopic display can present the full depth, layering and position of a projected object, so the observer understands the actual distribution of the projected object more intuitively and thus grasps the projected object or display content more comprehensively. However, the observer is not stationary: when the observer changes position, the virtual projected object must deflect accordingly, so that the observer can clearly view the stereoscopic image content from other angles and viewing is more comfortable.
Chinese patent CN104155840A discloses a 360° full-parallax three-dimensional display device based on a high-speed projector, which sends images to the high-speed projector according to the 3D scene information to be displayed and the position information of each observer, so that observers at different positions can all see accurate image information. However, to guarantee that an observer always sees the correct image, the image must be drawn in real time according to the observer's position; the more complex the 3D scene, the larger the required amount of computation. Meanwhile, for observers not to perceive flicker, the image seen by each eye of each observer needs to reach a refresh rate of 60 Hz, i.e. the frame rate provided to each observer needs to be 120 Hz; with N observers, the output frame rate must be N*120 Hz. At the same time, the position tracking device also needs considerable computation to track the observers' positions, so a high-performance computer or graphics workstation is required to meet these requirements.
Therefore, given the prior art's drawbacks of heavy computation and high demands on computer performance, the present application provides an intelligent projection method and system with which, simply and conveniently from the coordinates of the face or eye position, the projected object can be viewed from all directions with a clear image. With the projection method of the present application, when the observer is at different positions the projected object deflects with the direction in which the observer faces the projection screen, so the observer need not be confined to one position, improving the user experience.
SUMMARY
The technical problem mainly solved by the embodiments of the present application is to provide an intelligent projection method, system and intelligent terminal that can track the user's face or eyes and extract head feature points; when the face or eyes rotate or move, the virtual projected object deflects, and the viewing angle of the projected object relative to the observer is obtained by establishing a simple and clear mathematical model.
To solve the above technical problem, one technical solution adopted by the embodiments of the present application is to provide an intelligent projection method applied to an intelligent terminal, in which the virtual projected object deflects when the observer's head feature points move or rotate, the method comprising:
acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera;
determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane;
determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane;
determining the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
Further, acquiring the key positioning points of the observer's head comprises:
capturing a head image within the camera detection area;
reading first-frame information of the image;
detecting a face or eyes in the image;
acquiring the head key positioning points according to the face or eyes.
Further, the method also comprises:
tracking the movement or rotation of the head feature points;
calculating the offset di of each corresponding key positioning point, or the offset do of the target tracking frame's center relative to the initial tracking frame's center, and judging according to the offset di or the offset do whether the head feature points have moved or rotated.
Further, determining the observer's position in the camera plane and the projection plane comprises:
determining the observer's position in the camera plane from the spatial position coordinates of the camera and the head at a specific distance;
the angle a1 includes an angle x_angle_c between the head and the camera in the X-axis direction and an angle y_angle_c between the head and the camera in the Y-axis direction;
the formulas for determining the angle between the observer's head and the camera are as follows:
dx = (x_a - x_c) * dpixel, dy = (y_a - y_c) * dpixel,
x_angle_c = arctan(dx / d), y_angle_c = arctan(dy / d);
where x_angle_c is the angle between the head and the camera in the X-axis direction, y_angle_c is the angle between the head and the camera in the Y-axis direction, point C denotes the position of the head, point O denotes the position of the camera, d is the specific distance AO between the head and the camera, dpixel is the actual distance represented by each pixel in cm/pixel, (x_c, y_c) are the coordinates of point C in the image, and (x_a, y_a) are the coordinates of point A in the image.
Further, the angle a2 between the observer's head and the projected object includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction; the formulas for determining the angle between the observer's head and the projected object are as follows:
x_angle_o = ratio * x_angle_c, y_angle_o = ratio * y_angle_c;
where h is the height of the image, y is the projection distance of the head in the Y direction of the image, x_angle_c is the angle between the camera and the head in the X direction, y_angle_c is the angle between the camera and the head in the Y direction, x_angle_o is the angle between the head and the projected object in the X direction, y_angle_o is the angle between the head and the projected object in the Y direction, and k_0 and k_1 are fixed coefficients.
Further, the method for determining the angle a2 between the observer's head and the projected object also comprises:
determining the angle between the head and the projected object by establishing geometric coordinates with the observer's height fixed,
where the angle a2 includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction, and the formula for determining the angle between the observer's head and the projected object is as follows:
where y_angle is the tilt angle of the camera axis in the Y direction, y_angle_c is the angle between the head and the camera in the Y-axis direction, y_angle_o is the angle formed by the head and the projected object in the Y-axis direction, H is the observer's height, L2 is the distance between the head and the projected object, L1 is the distance between the camera and the projected object, h1 is the height of the projected object, h2 is the height of the camera, x_angle_c is the angle between the head and the camera in the X-axis direction, and x_angle_o is the angle formed by the head and the projected object in the X-axis direction.
To solve the above technical problem, another technical solution adopted by the embodiments of the present application is to provide an intelligent projection system applied to an intelligent terminal, in which the virtual projected object deflects when the observer's head feature points move or rotate, the system comprising:
a key positioning point acquiring unit, configured to acquire key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera;
a plane determining unit, configured to determine, according to the head key positioning points, the observer's position in the camera plane and the projection plane;
an angle determining unit, configured to determine the angle a1 between the observer's head and the camera according to the observer's position in the camera plane;
a viewing angle determining unit, configured to determine the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
Further, the key positioning point acquiring unit is specifically configured to:
capture a head image within the camera detection area;
read first-frame information of the image;
detect a face or eyes in the image;
acquire the head key positioning points according to the face or eyes.
To solve the above technical problem, another technical solution adopted by the embodiments of the present application is to provide an intelligent terminal, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the methods above.
To solve the above technical problem, another technical solution adopted by the embodiments of the present application is to provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an intelligent terminal, cause the intelligent terminal to perform any of the methods above.
The advantageous effects of the embodiments of the present application are: acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by the camera; determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane; determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and determining the angle a2 between the observer's head and the projected object according to the angle a1 and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer. This method is computationally simple, requires no high-performance computer equipment and is convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, relieving visual fatigue to some extent, helping the observer view the projected object at the optimal viewing angle every time, and also allowing multiple stereoscopic projected objects to be viewed clearly and accurately.
One or more embodiments are illustrated by the figures in the corresponding drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a flowchart of an intelligent projection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the positional relationship of an observer in the camera plane and the projection plane according to an embodiment of the present application;
FIG. 3 is a flowchart of acquiring key positioning points of an observer's head according to an embodiment of the present application;
FIG. 4 is another flowchart of acquiring key positioning points of an observer's head according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the spatial positional relationship between the camera and a face when the camera's central optical axis is parallel to the ground, according to an embodiment of the present application;
FIG. 6 is a mathematical geometric model diagram of a face, a camera and a projected object in the y direction with the height of the human body fixed, according to an embodiment of the present application;
FIG. 7 is a mathematical geometric model diagram of a face, a camera and a projected object in the x direction according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an intelligent projection system according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
To make the objectives, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In addition, the technical features involved in the various embodiments of the present application described below may be combined with one another as long as they do not conflict.
Three-dimensional stereo (3D Stereo) display technology can be divided into glasses-based and naked-eye types. The embodiments of the present application take the glasses-based type as an example: by wearing 3D glasses, the observer can clearly see the projected object image cast by the projector. Whether the virtual projected object is stationary or in motion, the viewing angle of the projected object relative to the observer (i.e. the projection angle of view) changes as the face turns, without the observer perceiving any flicker. The drawings of the embodiments take face movement as an example. It is worth noting that the present application does not restrict the specific method used to track and recognize the user's face or eyes and to determine the viewing angle of the projected object relative to the observer from the positions of the face, the camera and the projected object; any method that collects user images by image acquisition and deflects the projected object with face movement according to the pairwise angle relationships among the face, the camera and the projected object may be used.
Referring to FIG. 1, FIG. 1 is a flowchart of an intelligent projection method according to an embodiment of the present application; the method comprises:
S11: acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera;
Facial features, like the body's other biometric characteristics (such as fingerprints and irises), are innate; their uniqueness and resistance to replication provide the necessary premise for identity recognition. It can be understood that the key positioning points of the observer's head are selected from head feature points according to the face image, for example by visual features, pixel statistical features, face-image variation coefficient features, face-image algebraic features, histogram features, color features, template features and structural features. The position and size of the face are first calibrated in the image, and the key positioning points of the observer's head are selected according to preset rules and algorithms; generally, several positioning points are selected for judging whether the face position has changed, which improves accuracy and feasibility.
S12: determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane;
With reference to FIG. 2 it can be understood more intuitively that the face, the camera and the projected object together form a space; the observer's face is projected onto the camera plane and the projection plane, and the viewing angle of the projected object relative to the observer is calculated from the position and angle relationships of the objects in plane coordinates.
S13: determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane;
S14: determining the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
Once the angle a1 between the observer's head and the camera and the angle a2 between the observer's head and the projected object are determined, the viewing angle of the projected object relative to the observer is uniquely determined; the virtual projected object deflects as the face moves, and the projected object deflects to the target position according to the calculated viewing angle of the projected object relative to the observer, generally by a maximum of about 90°.
Referring to FIG. 3, FIG. 3 is a flowchart of acquiring key positioning points of an observer's head according to an embodiment of the present application. Acquiring the key positioning points of the observer's head comprises:
S21: capturing a head image within the camera detection area;
When the user enters the detection area, the signal of the user's entry is sensed; to overcome insufficient lighting, a multi-light-source face recognition technology based on active near-infrared images can be used. Tracking and capturing the head image can be implemented in many ways, which the present application does not limit. In the embodiments of the present application, the head image is collected by a camera; there may be one or more cameras, with multiple cameras distributed in a fixed space of the virtual projection scene and shooting the fixed space without blind angles. When the user enters the fixed space, the cameras can collect images of the user simultaneously; each camera is connected to the intelligent projection system, and they can work independently or cooperatively. A 360-degree omnidirectional camera can also be used for all-round, blind-spot-free acquisition.
S22: reading first-frame information of the image;
The acquired image is preprocessed to facilitate the extraction of facial features; the preprocessing includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening. The acquired image is read, and the first-frame information is read to judge whether the acquired image contains the required head image.
S23: detecting a face or eyes in the image;
Specifically, face or eye regions are selected from the head image containing the required content, and the largest region is filtered out.
S24: acquiring the head key positioning points according to the face or eyes.
For the face, the head key positioning points may be the positions of the two mouth corners, the two eyebrow peaks, the two ears, the two cheekbone prominences, the bridge of the nose and so on, or the observer's own distinctive facial features. For the eyes: if the face does not directly face the camera and/or the projected object surface but the eyes glance sideways or sweep across the image detection area, the virtual projected object can likewise deflect as the gaze changes; the method is thus not limited to the face, the eyes are also key positioning points, and the viewing angle of the projected object relative to the observer is determined in the same way as for the face; whether or not the person moves, the method of this application applies. The position of the user's eyes can be determined from the focal length of the image acquisition device, the position of the eyes in the image, the framing direction and the coordinates of the image acquisition device; the pupil and glint information of the eyes is then extracted from the image, and the person's visual axis (i.e. the line-of-sight direction) is reconstructed from it, implementing a non-contact free-space gaze tracking method; alternatively, a gaze tracking method based on iris recognition may be used. The position where the line of sight falls on the camera plane and the projection plane corresponds to the Face point of the face projection position in the embodiments of the present application (as shown in FIG. 5 and FIG. 6).
The above process acquires the head key positioning points at the observer's initial position and generates an initial tracking frame for the observer's head. The observer's head image continues to be captured, and the movement or rotation of the head feature points is tracked by repeating the above steps: acquire the head key positioning points at the observer's target position and generate a target tracking frame for the observer's head; calculate the offset di of each corresponding key positioning point, or the offset do of the target tracking frame's center relative to the initial tracking frame's center; and judge from di or do whether the head feature points have moved or rotated. If the head feature points have not moved or rotated, the viewing angle of the projected object relative to the observer is kept unchanged; in that case the system performs no subsequent calculation, analysis or control operations, effectively improving its operating efficiency.
It should be noted that the tracking of the movement or rotation of the head feature points must be continuous and uninterrupted, ensuring that the virtual projected object deflects whenever the observer's head feature points move or rotate, so that at no moment does the observer perceive the projected object to flicker. The observer's initial position and target position are two adjacent positions of the head key positioning points within the detection area, and the acquisition interval between the two positions can be made as short as possible.
Referring to FIG. 4, FIG. 4 is another flowchart of acquiring key positioning points of an observer's head according to an embodiment of the present application. It may specifically include image acquisition, image processing and data analysis.
Within the detection area, a face signal is captured to acquire an image; the first-frame information of the image is read; a face is detected in the image and, through image processing, the largest face region is filtered out; the initial face key positioning points are acquired; an initial tracking frame is generated, and face tracking mode is entered.
If the first-frame information is acquired successfully, the process proceeds to the next step; if acquisition fails, it jumps automatically to the face tracking step. The number of consecutive undetected frames is counted; if it exceeds 10, tracking is judged to have failed or the target to have disappeared, and the process returns automatically to the step of detecting a face in the image.
A face is detected in the neighborhood of the initial tracking frame and an image is acquired; it is judged whether the image in the target tracking frame overlaps that in the initial tracking frame. If they overlap, the largest overlapping region is filtered out, 5 face target key positioning points are obtained, and the offset di of each corresponding key positioning point is calculated; if they do not overlap, the number of consecutive undetected frames is counted and, if it is less than 10, the offset do of the target tracking frame's center relative to the initial tracking frame's center is calculated. If di > 5 (with 5 points, i.e. d1>5 & d2>5 & d3>5 & d4>5 & d5>5), or do > 10, the face is judged to have moved.
It can be understood that the thresholds 5 and 10 in di > 5 and do > 10 are chosen as general rules of thumb; the values can be set according to the actual application and precision requirements.
If the face or eyes have not moved or rotated, the viewing angle of the projected object relative to the observer is kept unchanged; if the face or eyes move or rotate, the viewing angle of the projected object relative to the observer is calculated according to the intelligent projection method provided by the embodiments of the present application, controlling the deflection of the projected object.
Referring to FIG. 5, FIG. 5 is a schematic diagram of the spatial positional relationship between the camera and a face when the camera's central optical axis is parallel to the ground, according to an embodiment of the present application. On the basis of FIG. 2, this embodiment establishes the spatial position coordinate relationship between the camera and the face at a certain distance to calculate the angles between the face and the camera in the X- and Y-axis directions, converting the spatial coordinate position into a plane coordinate model in the X- and Y-axis directions, which here gives the position in the camera plane.
Determining the observer's position in the camera plane and the projection plane comprises:
determining the observer's position in the camera plane from the spatial position coordinates of the camera and the head at a specific distance;
the angle a1 can be decomposed into an angle x_angle_c between the head and the camera in the X-axis direction and an angle y_angle_c between the head and the camera in the Y-axis direction;
the formulas for determining the angle between the observer's head and the camera are as follows:
With the distance between the face and the camera (segment AO) being d, the actual distance represented by each pixel is dpixel, in cm/pixel; suppose the coordinates of point C in the image are (x_c, y_c) and those of point A are (x_a, y_a), where (x_a - x_c) and (y_a - y_c) are pixel distances. Then the actual distance between A and B is:
dx = (x_a - x_c) * dpixel (1)
and the actual distance between A and D is:
dy = (y_a - y_c) * dpixel (2)
From the angle model shown in FIG. 5:
x_angle_c = arctan(dx / d) (3)
y_angle_c = arctan(dy / d) (4)
Referring to FIG. 6, FIG. 6 is a mathematical geometric model diagram of a face, a camera and a projected object in the y direction with the height of the human body fixed, according to an embodiment of the present application. On the basis of FIG. 2, this embodiment fits the angle between the projected object and the face: given the angle between the face and the camera, the angle between the face and the projected object, and their positional relationship, as the face's distances to the camera and to the projected object increase, the relationship between the head-object angle and the camera-head angle in the X direction, and likewise in the Y direction, can each be expressed with the exponential function exp().
Here h is the height of the image, y is the projection distance of the head in the Y direction of the image, x_angle_c is the angle between the camera and the head in the X direction, y_angle_c is the angle between the camera and the head in the Y direction, x_angle_o is the angle between the head and the projected object in the X direction, y_angle_o is the angle between the head and the projected object in the Y direction, and k_0 and k_1 are fixed coefficients.
The angle a2 can be decomposed into an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction; the formulas for determining the angle between the observer's head and the projected object are as follows:
x_angle_o = ratio * x_angle_c (6)
y_angle_o = ratio * y_angle_c (7)
On the basis of FIG. 2, this embodiment fixes the height of the human body and establishes a geometric coordinate model to calculate the angles between the face and the projected object surface in the X- and Y-axis directions. Determining the angle a2 between the observer's head and the projected object further comprises:
determining the angle between the head and the projected object by establishing geometric coordinates with the observer's height fixed,
where the angle a2 includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction, and the formula for determining the angle between the observer's head and the projected object is as follows:
From (8) and (9), we obtain:
where y_angle is the tilt angle of the camera axis in the Y direction, y_angle_c is the angle between the head and the camera in the Y-axis direction, y_angle_o is the angle formed by the head and the projected object in the Y-axis direction, H is the observer's height, L2 is the distance between the head and the projected object, L1 is the distance between the camera and the projected object, h1 is the height of the projected object, h2 is the height of the camera, x_angle_c is the angle between the head and the camera in the X-axis direction, and x_angle_o is the angle formed by the head and the projected object in the X-axis direction.
Referring to FIG. 7, FIG. 7 is a mathematical geometric model diagram of a face, a camera and a projected object in the x direction according to an embodiment of the present application, where x_angle_c denotes the angle between the face and the camera in the x-axis direction and x_angle_o denotes the angle formed by the face and the projected object in the x-axis direction; the calculation formulas are as follows:
The decomposition angles of the angle a1 between the face and the camera in the X and Y directions are x_angle_c and y_angle_c; similarly, the decomposition angles of the angle a2 between the face and the projected object surface in the X and Y directions are x_angle_o and y_angle_o. The viewing angle of the projected object relative to the observer is calculated from the decomposition angles x_angle_c, y_angle_c, x_angle_o and y_angle_o of a1 and a2 in the X and Y directions, realizing face-controlled deflection of the projected object and helping the observer view the projected object at the optimal viewing angle every time.
The method for determining the viewing angle of the projected object relative to the observer may also build a three-dimensional face model: from the 3D position distribution of the face key points and the distribution of the key points detected in the 2D image, equations for the rotation matrix and the offset matrix are established to determine the face's position (x, y, z) in world coordinates, and the positional relationship between the virtual camera and the projected object in Unity3D is determined from (x, y, z).
It can be understood that, with the intelligent projection method provided by the embodiments of the present application, the projection viewing angles of multiple simultaneously displayed projected objects can be calculated accurately, and the observer can clearly see the images of multiple virtual projected objects displayed simultaneously in different directions.
This method is computationally simple, requires no high-performance computer equipment and is convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, relieving visual fatigue to some extent while also allowing multiple stereoscopic projected objects to be viewed clearly and accurately.
Referring to FIG. 8, FIG. 8 is a schematic diagram of an intelligent projection system 800 according to an embodiment of the present application. The system is applied to an intelligent terminal and comprises a key positioning point acquiring unit 810, a plane determining unit 820, an angle determining unit 830 and a viewing angle determining unit 840.
The key positioning point acquiring unit 810 is configured to acquire key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera; the plane determining unit 820 is configured to determine, according to the head key positioning points, the observer's position in the camera plane and the projection plane; the angle determining unit 830 is configured to determine the angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and the viewing angle determining unit 840 is configured to determine the angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
Optionally, the key positioning point acquiring unit 810 is specifically configured to: capture a head image within the camera detection area; read first-frame information of the image; detect a face or eyes in the image; and acquire the head key positioning points according to the face or eyes.
Since the device embodiment and the method embodiment are based on the same concept, provided their contents do not conflict, the device embodiment may refer to the method embodiment, and details are not repeated here.
The 3D display system of the present application has a simple structure: only a camera, 3D glasses and an intelligent terminal device containing the software that controls the system's operation need to be prepared. The camera can be an ordinary projection camera, and the system can process one or more collected images simultaneously. The projected object is displayed on any displayable area, and the observer need not stay fixed at one position, because the projected object can deflect as the observer moves, relieving visual fatigue to some extent. Meanwhile, the observer's eyes are not limited to viewing a single stereoscopic display image: multiple projected-object images in different directions can be seen clearly at the same time. Using formulas (1)-(14), the system can accurately calculate the viewing angles of multiple simultaneously displayed projected objects, so that each image presents a clear viewing angle according to the face position or eye movement.
In the embodiments of the present application, the intelligent terminal may be an electronic device with a display screen, such as a smartphone, a computer, a personal digital assistant (PDA), a tablet computer, a smartwatch or an e-book reader. The intelligent terminal supports an open operating system platform; the operating system may be a UNIX system, a Linux system, a Mac OS X system, a Windows system, an iOS system, an Android system, a WP system, a Chrome OS system, and so on.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application. As shown in FIG. 9, the intelligent terminal 900 includes one or more processors 901 and a memory 902; one processor 901 is taken as an example in FIG. 9.
The processor 901 and the memory 902 may be connected by a bus or by other means; FIG. 9 takes a bus connection as an example.
The memory 902, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the intelligent projection method in the embodiments of the present application (for example, the key positioning point acquiring unit 810, the plane determining unit 820, the angle determining unit 830 and the viewing angle determining unit 840 shown in FIG. 8). By running the non-volatile software programs, instructions and modules stored in the memory 902, the processor 901 executes the various functional applications and data processing of the intelligent projection system, i.e. implements the projection method of the above method embodiments and the functions of the modules and units of the above system embodiment.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the intelligent projection system, and the like. In addition, the memory 902 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 902 optionally includes memory set remotely relative to the processor 901, and such remote memory may be connected to the processor 901 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The program instructions/modules are stored in the memory 902 and, when executed by the one or more processors 901, perform the intelligent projection method of any of the above method embodiments, for example executing method steps S11 to S14 of FIG. 1 described above; the functions of the modules or units described with reference to FIG. 8 can also be implemented.
As another aspect of the embodiments of the present application, an embodiment of the present application further provides a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium stores electronic-device-executable instructions for causing an electronic device to perform the intelligent projection method of the above embodiments: acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by the camera; determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane; determining the angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and determining the angle a2 between the observer's head and the projected object according to the angle a1 and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer. This method is computationally simple, requires no high-performance computer equipment and is convenient for all user groups; the algorithm runs smoothly and accurately, letting the observer understand the displayed content of the virtual projected object from all sides at different positions; the stereoscopically displayed virtual projected object can deflect as the observer's position moves, with a deflection angle of up to 90°, relieving visual fatigue to some extent while also allowing multiple stereoscopic projected objects to be viewed clearly and accurately.
The above product can perform the method provided by the embodiments of the present application and has the corresponding functional modules and advantageous effects for performing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
The system or device embodiments described above are merely illustrative: the unit modules described as separate components may or may not be physically separate, and the components shown as module units may or may not be physical units; they may be located in one place or distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above implementations, a person skilled in the art can clearly understand that each implementation can be realized by software plus a general-purpose hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solutions, or the part contributing to the related art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions for causing at least one computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Within the idea of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist, which, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent substitutions made for some of their technical features, and that such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
- An intelligent projection method, applied to an intelligent terminal, wherein a virtual projected object deflects when the observer's head feature points move or rotate, characterized in that the method comprises: acquiring key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera; determining, according to the head key positioning points, the observer's position in the camera plane and the projection plane; determining an angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and determining an angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- The method according to claim 1, characterized in that acquiring the key positioning points of the observer's head comprises: capturing a head image within the camera detection area; reading first-frame information of the image; detecting a face or eyes in the image; and acquiring the head key positioning points according to the face or eyes.
- The method according to claim 2, characterized in that the method further comprises: tracking the movement or rotation of the head feature points; and calculating an offset di of each corresponding key positioning point, or an offset do of the target tracking frame's center relative to the initial tracking frame's center, and judging according to the offset di or the offset do whether the head feature points have moved or rotated.
- The method according to any one of claims 1-3, characterized in that determining the observer's position in the camera plane and the projection plane comprises: determining the observer's position in the camera plane from the spatial position coordinates of the camera and the head at a specific distance; the angle a1 includes an angle x_angle_c between the head and the camera in the X-axis direction and an angle y_angle_c between the head and the camera in the Y-axis direction; the formulas for determining the angle between the observer's head and the camera are as follows: dx = (x_a - x_c) * dpixel, dy = (y_a - y_c) * dpixel, x_angle_c = arctan(dx/d), y_angle_c = arctan(dy/d); where x_angle_c is the angle between the head and the camera in the X-axis direction, y_angle_c is the angle between the head and the camera in the Y-axis direction, point C denotes the position of the head, point O denotes the position of the camera, d is the specific distance AO between the head and the camera, dpixel is the actual distance represented by each pixel in cm/pixel, (x_c, y_c) are the coordinates of point C in the image, and (x_a, y_a) are the coordinates of point A in the image.
- The method according to claim 4, characterized in that the angle a2 between the observer's head and the projected object includes an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction, and the formulas for determining the angle between the observer's head and the projected object are as follows: x_angle_o = ratio * x_angle_c, y_angle_o = ratio * y_angle_c; where h is the height of the image, y is the projection distance of the head in the Y direction of the image, x_angle_c is the angle between the camera and the head in the X direction, y_angle_c is the angle between the camera and the head in the Y direction, x_angle_o is the angle between the head and the projected object in the X direction, y_angle_o is the angle between the head and the projected object in the Y direction, and k_0 and k_1 are fixed coefficients.
- The method according to claim 4, characterized in that the method for determining the angle a2 between the observer's head and the projected object further comprises: determining the angle between the head and the projected object by establishing geometric coordinates with the observer's height fixed, the angle a2 including an angle x_angle_o between the head and the projected object in the X-axis direction and an angle y_angle_o between the head and the projected object in the Y-axis direction, and the formula for determining the angle between the observer's head and the projected object being as follows: where y_angle is the tilt angle of the camera axis in the Y direction, y_angle_c is the angle between the head and the camera in the Y-axis direction, y_angle_o is the angle formed by the head and the projected object in the Y-axis direction, H is the observer's height, L2 is the distance between the head and the projected object, L1 is the distance between the camera and the projected object, h1 is the height of the projected object, h2 is the height of the camera, x_angle_c is the angle between the head and the camera in the X-axis direction, and x_angle_o is the angle formed by the head and the projected object in the X-axis direction.
- An intelligent projection system, applied to an intelligent terminal, wherein a virtual projected object deflects when the observer's head feature points move or rotate, characterized in that the system comprises: a key positioning point acquiring unit, configured to acquire key positioning points of the observer's head, the head key positioning points being obtained from an observer image captured by a camera; a plane determining unit, configured to determine, according to the head key positioning points, the observer's position in the camera plane and the projection plane; an angle determining unit, configured to determine an angle a1 between the observer's head and the camera according to the observer's position in the camera plane; and a viewing angle determining unit, configured to determine an angle a2 between the observer's head and the projected object according to the angle a1 between the observer's head and the camera and the observer's position in the projection plane, thereby determining the viewing angle of the projected object relative to the observer.
- The system according to claim 7, characterized in that the key positioning point acquiring unit is specifically configured to: capture a head image within the camera detection area; read first-frame information of the image; detect a face or eyes in the image; and acquire the head key positioning points according to the face or eyes.
- An intelligent terminal, characterized by comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-6.
- A non-transitory computer-readable storage medium, the computer-readable storage medium storing computer-executable instructions which, when executed by an intelligent terminal, cause the intelligent terminal to perform the method according to any one of claims 1-6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710881640.1A CN107656619A (zh) | 2017-09-26 | 2017-09-26 | Intelligent projection method and system, and intelligent terminal
CN201710881640.1 | 2017-09-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019062056A1 true WO2019062056A1 (zh) | 2019-04-04 |
Family
ID=61131266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/081147 WO2019062056A1 (zh) | 2017-09-26 | 2018-03-29 | Intelligent projection method and system, and intelligent terminal
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107656619A (zh) |
WO (1) | WO2019062056A1 (zh) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107656619A (zh) * | 2017-09-26 | 2018-02-02 | 广景视睿科技(深圳)有限公司 | 一种智能投影方法、系统及智能终端 |
CN109271028A (zh) * | 2018-09-18 | 2019-01-25 | 北京猎户星空科技有限公司 | 智能设备的控制方法、装置、设备和存储介质 |
WO2020056689A1 (zh) * | 2018-09-20 | 2020-03-26 | 太平洋未来科技(深圳)有限公司 | 一种ar成像方法、装置及电子设备 |
CN109246414B (zh) * | 2018-09-27 | 2020-04-28 | 青岛理工大学 | 一种投影式增强现实图像生成方法及系统 |
CN110458617B (zh) * | 2019-08-07 | 2022-03-18 | 卓尔智联(武汉)研究院有限公司 | 广告投放方法、计算机装置及可读存储介质 |
CN110633664A (zh) * | 2019-09-05 | 2019-12-31 | 北京大蛋科技有限公司 | 基于人脸识别技术追踪用户的注意力方法和装置 |
CN110940029A (zh) * | 2019-10-28 | 2020-03-31 | 珠海格力电器股份有限公司 | 一种厨房空调投影装置及其控制方法 |
CN111031298B (zh) | 2019-11-12 | 2021-12-10 | 广景视睿科技(深圳)有限公司 | 控制投影模块投影的方法、装置和投影系统 |
CN111016785A (zh) * | 2019-11-26 | 2020-04-17 | 惠州市德赛西威智能交通技术研究院有限公司 | 一种基于人眼位置的平视显示系统调节方法 |
CN112650461B (zh) * | 2020-12-15 | 2021-07-13 | 广州舒勇五金制品有限公司 | 一种基于相对位置的展示系统 |
CN112672139A (zh) * | 2021-03-16 | 2021-04-16 | 深圳市火乐科技发展有限公司 | 投影显示方法、装置及计算机可读存储介质 |
CN114489326B (zh) * | 2021-12-30 | 2023-12-15 | 南京七奇智能科技有限公司 | 面向人群的虚拟人交互注意力驱动的姿态控制装置及方法 |
CN117348728A (zh) * | 2023-10-08 | 2024-01-05 | 南京市草本视觉科技有限公司 | 一种文创产品vr虚拟展示方法及系统 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102307288A (zh) | 2011-07-27 | 2012-01-04 | 中国计量学院 | Projection system following first-person line-of-sight movement based on face recognition |
CN103019507B (zh) | 2012-11-16 | 2015-03-25 | 福州瑞芯微电子有限公司 | Method for displaying three-dimensional graphics with the viewpoint angle changed based on face tracking |
CN106200991B (zh) | 2016-09-18 | 2020-11-24 | 山东兴创信息科技有限公司 | Angle adjustment method and apparatus, and mobile terminal |
-
2017
- 2017-09-26 CN CN201710881640.1A patent/CN107656619A/zh active Pending
-
2018
- 2018-03-29 WO PCT/CN2018/081147 patent/WO2019062056A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950550A (zh) | 2010-09-28 | 2011-01-19 | 冠捷显示科技(厦门)有限公司 | Display device displaying pictures at different angles based on the viewer's viewing angle |
CN103955279A (zh) | 2014-05-19 | 2014-07-30 | 腾讯科技(深圳)有限公司 | Viewing angle feedback method and terminal |
CN107003744A (zh) | 2016-12-01 | 2017-08-01 | 深圳前海达闼云端智能科技有限公司 | Viewpoint determination method and apparatus, electronic device and computer program product |
CN107656619A (zh) | 2017-09-26 | 2018-02-02 | 广景视睿科技(深圳)有限公司 | Intelligent projection method and system, and intelligent terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107656619A (zh) | 2018-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019062056A1 (zh) | Intelligent projection method and system, and intelligent terminal | |
US12079382B2 (en) | Methods and apparatuses for determining and/or evaluating localizing maps of image display devices | |
US10269177B2 (en) | Headset removal in virtual, augmented, and mixed reality using an eye gaze database | |
CN109960401B (zh) | Dynamic projection method and apparatus based on face tracking, and system thereof | |
CN104317391B (zh) | Three-dimensional palm gesture recognition interaction method and system based on stereo vision | |
US9373156B2 (en) | Method for controlling rotation of screen picture of terminal, and terminal | |
KR101874494B1 (ko) | Apparatus and method for calculating three-dimensional positions of feature points | |
US11849102B2 (en) | System and method for processing three dimensional images | |
US8571258B2 (en) | Method of tracking the position of the head in real time in a video image stream | |
US9691152B1 (en) | Minimizing variations in camera height to estimate distance to objects | |
CN104881114B (zh) | Real-time angle-rotation matching method based on 3D glasses try-on | |
US11557106B2 (en) | Method and system for testing wearable device | |
CN112207821B (zh) | Target search method for a vision robot, and robot | |
US11181978B2 (en) | System and method for gaze estimation | |
CN108305321B (zh) | Method and apparatus for real-time reconstruction of a 3D hand skeleton model based on a binocular color imaging system | |
CN104599317A (zh) | Mobile terminal and method implementing a 3D scanning and modeling function | |
Reale et al. | Viewing direction estimation based on 3D eyeball construction for HRI | |
WO2023071882A1 (zh) | Human eye gaze detection method, control method and related device | |
CN103517060A (zh) | Display control method and apparatus for a terminal device | |
WO2014008320A1 (en) | Systems and methods for capture and display of flex-focus panoramas | |
CN108282650B (zh) | Naked-eye stereoscopic display method, apparatus, system and storage medium | |
US20220358724A1 (en) | Information processing device, information processing method, and program | |
JP2023515205A (ja) | Display method, apparatus, terminal device and computer program | |
CN110909571B (zh) | High-precision face recognition spatial positioning method | |
US20200211275A1 (en) | Information processing device, information processing method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18860603 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/09/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18860603 Country of ref document: EP Kind code of ref document: A1 |