CN109961478A - A naked-eye stereoscopic display method, device and equipment - Google Patents
- Publication number
- CN109961478A CN109961478A CN201711420048.8A CN201711420048A CN109961478A CN 109961478 A CN109961478 A CN 109961478A CN 201711420048 A CN201711420048 A CN 201711420048A CN 109961478 A CN109961478 A CN 109961478A
- Authority
- CN
- China
- Prior art keywords
- image
- object localization
- user
- positioning
- eyes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The present invention provides a naked-eye stereoscopic display method, device, and equipment. The naked-eye stereoscopic display method includes: obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user; determining the position of the positioning marker in the positioning image; determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image; and performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user. It can thus be seen that, when spatially locating the target positioning user, the solution of the present invention does not require the user to wear a headband carrying an infrared tracker; only a single positioning marker needs to be arranged on the user. This overcomes the prior-art inconvenience of performing tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
Description
Technical field
The present invention relates to the field of positioning technology, and in particular to a naked-eye stereoscopic display method, device, and equipment.
Background art
3D high-definition video technology is gradually being adopted in thoracoscopic surgery: a doctor only needs to wear auxiliary 3D glasses to see a stereoscopic surgical image on a 3D display. However, glasses-assisted thoracoscopic surgery requires the doctor to wear 3D glasses, which has several drawbacks. First, after the light passes through the polarizing filters, brightness drops by about 50%, so the doctor's field of view appears dim, and prolonged viewing causes fatigue. Second, for doctors unaccustomed to wearing glasses, 3D glasses cause various kinds of discomfort: exhaled water vapor can fog the lenses, long-term wear presses on the nose and ears, and dizziness may occur during surgery. Third, doctors who already wear prescription glasses must wear two pairs of glasses at once.
To overcome the above drawbacks, tracking-based naked-eye 3D display systems have recently been introduced into thoracoscopic surgery. The doctor needs no glasses to see a stereoscopic surgical image on the 3D display. Moreover, the system tracks the position of the doctor's eyes, i.e., the viewing position, and adjusts the display output in real time, so that even after the doctor's viewing position changes, a correct stereoscopic effect is still presented, avoiding problems such as reversed views, ghosting, and distortion, and providing the doctor with a very good 3D stereoscopic viewing experience. To track the viewing position, the doctor is generally required to wear a headband-like infrared tracker; infrared positioning of the headband tracker then realizes tracking of the viewing position.
However, although a tracking-based naked-eye 3D display system eliminates the various kinds of discomfort caused by wearing auxiliary 3D glasses, the doctor still needs to wear a headband-like device carrying an infrared tracker. This tracker has its own problems, such as the headband pressing on the head and the tracker needing to be charged, so the user experience is poor.
Summary of the invention
Embodiments of the present invention provide a naked-eye stereoscopic display method, device, and equipment, to solve the prior-art inconvenience of performing tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
Embodiments of the present invention provide a naked-eye stereoscopic display method, comprising:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content seen by the target positioning user is adapted to the spatial position of the eyes.
Wherein, in the above scheme,
the predetermined image acquisition device is a 3D motion-sensing camera;
the positioning image is a color image captured by the 3D motion-sensing camera;
the step of determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image comprises:
calling a first application programming interface (API) function provided by the 3D motion-sensing camera, substituting the position of the positioning marker in the color image into the first API function, and obtaining a target position, returned by the first API function, in a body index map provided by the 3D motion-sensing camera, the target position corresponding to the position of the positioning marker in the color image;
extracting, from the body index map, a target positioning user index number corresponding to the target position;
obtaining, according to the target positioning user index number, skeleton data provided by the 3D motion-sensing camera and corresponding to the target positioning user index number;
determining the spatial position of the eyes of the target positioning user according to the skeleton data corresponding to the target positioning user index number.
Wherein, in the above scheme,
the step of determining the spatial position of the eyes of the target positioning user according to the skeleton data corresponding to the target positioning user index number comprises:
determining the head spatial position of the target positioning user according to the skeleton data corresponding to the target positioning user index number;
calling, according to the head spatial position of the target positioning user, a second API function provided by the 3D motion-sensing camera, substituting the head spatial position into the second API function, and obtaining the position, returned by the second API function, of the head of the target positioning user in the color image;
determining the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
determining the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the color image.
Wherein, in the above scheme, the step of determining the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image comprises:
detecting, using a face alignment algorithm, the positions of facial feature points of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
determining the position of the eyes of the target positioning user in the color image according to the detected positions of the facial feature points.
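The patent does not name a specific face alignment algorithm. Assuming a 68-point landmark layout such as dlib's (where points 36-41 and 42-47 outline the two eyes), the eye positions can be reduced from the detected feature points by simple averaging; the synthetic landmark values below are illustrative:

```python
import numpy as np

# dlib's 68-point convention: indices 36-41 outline one eye, 42-47 the other.
RIGHT_EYE, LEFT_EYE = slice(36, 42), slice(42, 48)

def eye_centers(landmarks):
    """Average each eye's landmark points to get two (x, y) eye centers."""
    pts = np.asarray(landmarks, dtype=float)  # shape (68, 2)
    return pts[RIGHT_EYE].mean(axis=0), pts[LEFT_EYE].mean(axis=0)

# Synthetic landmarks: zeros everywhere except the eye points.
lm = np.zeros((68, 2))
lm[36:42] = [100, 80]   # all right-eye points at (100, 80)
lm[42:48] = [140, 80]   # all left-eye points at (140, 80)
right, left = eye_centers(lm)
```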
Wherein, in the above scheme, the 3D motion-sensing camera includes a Kinect motion-sensing camera or an Xtion motion-sensing camera.
Wherein, in the above scheme, the step of determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image comprises:
determining the position of the eyes of the target positioning user in the positioning image according to the position of the positioning marker in the positioning image and a predetermined positional correspondence between the location where the positioning marker is arranged on the target positioning user and the eyes of the target positioning user;
determining the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the positioning image.
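This variant can be sketched as a precalibrated offset lookup: the pixel displacement from the marker's wearing position (e.g., the chest) to each eye is measured in advance and simply added to the detected marker position. All offset values below are hypothetical:

```python
import numpy as np

def eyes_from_marker(marker_xy, eye_offsets):
    """Apply a precalibrated pixel offset from the marker to each eye.

    eye_offsets: dict mapping eye name -> (dx, dy), measured in advance for
    the position where the marker is worn. The values here are hypothetical.
    """
    m = np.asarray(marker_xy, dtype=float)
    return {eye: m + np.asarray(off, dtype=float)
            for eye, off in eye_offsets.items()}

# Marker worn on the chest: eyes sit roughly 120 px above it in this setup.
offsets = {"left": (-20, -120), "right": (20, -120)}
eyes = eyes_from_marker((320, 400), offsets)
```

A fixed offset is a crude model (it ignores posture changes), which is presumably why the camera-API and face-alignment variants are also claimed.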
Wherein, in the above scheme, the step of determining the position of the positioning marker in the positioning image comprises:
determining the position of the positioning marker in the positioning image according to a predetermined machine learning model.
Wherein, in the above scheme,
the positioning marker includes multiple regions, each region is filled with one color, the colors filled in any two adjacent regions are different, and the boundary lines between adjacent regions intersect at a single point;
the step of determining the position of the positioning marker in the positioning image comprises:
obtaining the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker;
determining the coordinates of the intersection point on the positioning image as the position of the positioning marker in the positioning image.
Wherein, in the above scheme, the step of obtaining the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker comprises:
obtaining, based on the positioning image, a region-saliency image for each of the colors of the multiple regions;
filtering the region-saliency image of each color separately to obtain, for each region-saliency image, a corresponding filtered image in which the intersection point is highlighted;
adding or multiplying the values of the pixels at corresponding positions of the obtained filtered images to obtain a target filtered image in which the intersection point stands out;
obtaining the coordinates, on the target filtered image, of the pixel with the maximum value on the target filtered image, and determining these coordinates as the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker.
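The filter-then-combine steps above can be sketched with a plain mean filter: blurring each color's saliency image spreads its response across region boundaries, so the pixel-wise product is large only near the intersection, where both colors are present. The marker geometry and kernel size below are illustrative:

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter implemented by summing shifted copies (no SciPy needed)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def find_intersection(red_sal, blue_sal, k=5):
    """Blur each saliency map, multiply them, take the argmax: the product
    peaks near the boundary intersection, where red and blue regions meet."""
    prod = box_blur(red_sal, k) * box_blur(blue_sal, k)
    y, x = np.unravel_index(np.argmax(prod), prod.shape)
    return x, y

# Synthetic 40x40 four-quadrant marker (intersection at the image center):
# red fills two opposite quadrants, blue fills the other two.
red = np.zeros((40, 40))
blue = np.zeros((40, 40))
red[:20, :20] = red[20:, 20:] = 1.0
blue[:20, 20:] = blue[20:, :20] = 1.0
x, y = find_intersection(red, blue)  # lands within a pixel of (20, 20)
```

The product is used rather than the sum here because a pixel deep inside a single-color region still has a high blurred response in one map but near zero in the other, so multiplication suppresses it.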
Wherein, in the above scheme,
the positioning marker includes at least four regions that are centrally symmetric about the intersection point of the boundary lines, and any two adjacent regions are filled with red and blue respectively;
the step of obtaining a region-saliency image for each of the colors of the multiple regions comprises:
converting the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel, taking the first image of the U channel as the blue-region saliency image and the second image of the V channel as the red-region saliency image;
or
converting the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel;
performing threshold segmentation on the first image of the U channel: setting the values of pixels in the first image whose values are less than a first preset threshold to a first preset value, leaving the values of pixels in the first image whose values are greater than or equal to the first preset threshold unchanged, and taking the image obtained after the threshold segmentation as the blue-region saliency image;
performing threshold segmentation on the second image of the V channel: setting the values of pixels in the second image whose values are less than a second preset threshold to a second preset value, leaving the values of pixels in the second image whose values are greater than or equal to the second preset threshold unchanged, and taking the image obtained after the threshold segmentation as the red-region saliency image.
Embodiments of the present invention also provide a naked-eye stereoscopic display device, comprising:
an image obtaining module, configured to obtain a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user;
a first positioning module, configured to determine the position of the positioning marker in the positioning image;
a second positioning module, configured to determine the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
a display module, configured to perform naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content seen by the target positioning user is adapted to the spatial position of the eyes.
Wherein, in the above scheme,
the predetermined image acquisition device is a 3D motion-sensing camera;
the positioning image is a color image captured by the 3D motion-sensing camera;
the second positioning module includes:
a function calling unit, configured to call a first application programming interface (API) function provided by the 3D motion-sensing camera, substitute the position of the positioning marker in the color image into the first API function, and obtain a target position, returned by the first API function, in a body index map provided by the 3D motion-sensing camera, the target position corresponding to the position of the positioning marker in the color image;
an index number extraction unit, configured to extract, from the body index map, a target positioning user index number corresponding to the target position;
a skeleton data determination unit, configured to obtain, according to the target positioning user index number, skeleton data provided by the 3D motion-sensing camera and corresponding to the target positioning user index number;
a spatial positioning unit, configured to determine the spatial position of the eyes of the target positioning user according to the skeleton data corresponding to the target positioning user index number.
Wherein, in the above scheme, the spatial positioning unit includes:
a head positioning subunit, configured to determine the head spatial position of the target positioning user according to the skeleton data corresponding to the target positioning user index number;
a function calling subunit, configured to call, according to the head spatial position of the target positioning user, a second API function provided by the 3D motion-sensing camera, substitute the head spatial position into the second API function, and obtain the position, returned by the second API function, of the head of the target positioning user in the color image;
a first eye positioning subunit, configured to determine the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
a second eye positioning subunit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the color image.
Wherein, in the above scheme, the first eye positioning subunit is specifically configured to:
detect, using a face alignment algorithm, the positions of facial feature points of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
determine the position of the eyes of the target positioning user in the color image according to the detected positions of the facial feature points.
Wherein, in the above scheme, the 3D motion-sensing camera includes a Kinect motion-sensing camera or an Xtion motion-sensing camera.
Wherein, in the above scheme, the second positioning module includes:
a first determination unit, configured to determine the position of the eyes of the target positioning user in the positioning image according to the position of the positioning marker in the positioning image and a predetermined positional correspondence between the location where the positioning marker is arranged on the target positioning user and the eyes of the target positioning user;
a second determination unit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the positioning image.
Wherein, in the above scheme, the first positioning module includes:
a first positioning unit, configured to determine the position of the positioning marker in the positioning image according to a predetermined machine learning model.
Wherein, in the above scheme,
the positioning marker includes multiple regions, each region is filled with one color, the colors filled in any two adjacent regions are different, and the boundary lines between adjacent regions intersect at a single point;
the first positioning module includes:
an intersection coordinate obtaining unit, configured to obtain the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker;
a second positioning unit, configured to determine the coordinates of the intersection point on the positioning image as the position of the positioning marker in the positioning image.
Wherein, in the above scheme, the intersection coordinate obtaining unit includes:
a first image processing subunit, configured to obtain, based on the positioning image, a region-saliency image for each of the colors of the multiple regions;
a second image processing subunit, configured to filter the region-saliency image of each color separately to obtain, for each region-saliency image, a corresponding filtered image in which the intersection point is highlighted;
a third image processing subunit, configured to add or multiply the values of the pixels at corresponding positions of the obtained filtered images to obtain a target filtered image in which the intersection point stands out;
an intersection coordinate determination subunit, configured to obtain the coordinates, on the target filtered image, of the pixel with the maximum value on the target filtered image, and determine these coordinates as the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker.
Wherein, in the above scheme,
the positioning marker includes at least four regions that are centrally symmetric about the intersection point of the boundary lines, and any two adjacent regions are filled with red and blue respectively;
the first image processing subunit is specifically configured to:
convert the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel, take the first image of the U channel as the blue-region saliency image, and take the second image of the V channel as the red-region saliency image;
or
convert the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel;
perform threshold segmentation on the first image of the U channel: set the values of pixels in the first image whose values are less than a first preset threshold to a first preset value, leave the values of pixels in the first image whose values are greater than or equal to the first preset threshold unchanged, and take the image obtained after the threshold segmentation as the blue-region saliency image;
perform threshold segmentation on the second image of the V channel: set the values of pixels in the second image whose values are less than a second preset threshold to a second preset value, leave the values of pixels in the second image whose values are greater than or equal to the second preset threshold unchanged, and take the image obtained after the threshold segmentation as the red-region saliency image.
Embodiments of the present invention also provide naked-eye stereoscopic display equipment, comprising:
a processor and a memory;
the memory is configured to store an executable computer program;
the processor calls the computer program in the memory to perform the following steps:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content seen by the target positioning user is adapted to the spatial position of the eyes.
Embodiments of the present invention also provide a computer-readable storage medium including a computer program, the computer program being executable by a processor to perform the following steps:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content seen by the target positioning user is adapted to the spatial position of the eyes.
The beneficial effects of the embodiments of the present invention are as follows:
In the embodiments of the present invention, a positioning marker is arranged on the target positioning user, and a positioning image of the scene where the target positioning user is located is captured by a predetermined image acquisition device. The position of the positioning marker in the positioning image is determined first, and then, according to that position, the spatial position of the eyes of the target positioning user is determined. It follows that, when spatially locating the target positioning user, the embodiments of the present invention do not require the user to wear a headband carrying an infrared tracker on the head: with only a single positioning marker, the target positioning user who needs to be spatially located can be identified among multiple users and then spatially located according to the positioning marker, overcoming the prior-art inconvenience of performing tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 shows the flow chart of the naked-eye stereoscopic display method of an embodiment of the present invention;
Fig. 2 shows a structural schematic diagram of the positioning marker in an embodiment of the present invention;
Fig. 3 shows a structural block diagram of the naked-eye stereoscopic display device of an embodiment of the present invention;
Fig. 4 shows a structural block diagram of the naked-eye stereoscopic display equipment of an embodiment of the present invention;
Fig. 5 shows a test image of the positioning marker in an embodiment of the present invention;
Fig. 6(a) shows the red-region saliency image obtained from the test image of Fig. 5;
Fig. 6(b) shows the filtered image, with the intersection point highlighted, obtained from the red-region saliency image of Fig. 6(a);
Fig. 7(a) shows the blue-region saliency image obtained from the test image of Fig. 5;
Fig. 7(b) shows the filtered image, with the intersection point highlighted, obtained from the blue-region saliency image of Fig. 7(a);
Fig. 8 shows the target filtered image obtained from Fig. 6(b) and Fig. 7(b).
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a naked-eye stereoscopic display method. As shown in Fig. 1, the naked-eye stereoscopic display method includes:
Step 101: obtaining a positioning image provided by a predetermined image acquisition device.
The positioning image contains an image of at least one user; the at least one user includes a target positioning user, and a positioning marker is arranged on the target positioning user.
To locate the target positioning user, in the embodiments of the present invention an image of the target positioning user is first captured by the predetermined image acquisition device. In many application scenarios, the target positioning user may not be the only person in the region; other users may also be on the scene. Therefore, the image captured by the image acquisition device is likely to contain images of multiple users. That is, the positioning image contains an image of at least one user, and the at least one user includes the target positioning user.
In the embodiments of the present invention, to perform naked-eye stereoscopic display for the target positioning user, a positioning marker needs to be arranged on the target positioning user. The positioning marker is used to distinguish the target positioning user from the other users in the captured scene, i.e., to identify, among multiple users, the target positioning user who needs naked-eye stereoscopic display. The location where the positioning marker is arranged is not limited; it may be placed on the front of the user's body, on the upper arm, or at other positions. For the surgical scenario described in the background, the surgeon's varying postures during surgery need to be considered, so that the positioning marker is not occluded in the image captured by the image acquisition device.
It follows that, when performing naked-eye stereoscopic display for the target positioning user, the embodiments of the present invention do not require the target positioning user to wear a headband carrying an infrared tracker on the head: with only a single positioning marker, the target positioning user who needs naked-eye stereoscopic display can be identified among multiple users. This avoids problems such as the headband tracker pressing on the user's head and needing to be charged, and greatly facilitates naked-eye stereoscopic display for the target positioning user.
In addition, the predetermined image acquisition device is not limited. For the convenience of subsequent position calculations, the predetermined image acquisition device may be a 3D motion-sensing camera. Generally speaking, when a scene is captured by a 3D motion-sensing camera, the camera can provide a color image and a depth image of the scene, as well as skeleton data of the persons in the color image. Specifically, for a person in the color image, the 3D motion-sensing camera can provide first information including skeleton information, biometric information, and so on; the first information contains information on each part of the human body, such as facial information, head information, and arm information. Therefore, these functions of the 3D motion-sensing camera and the data it provides can be used to locate a person or a particular part of a person; details are given in the following description. For example, the 3D motion-sensing camera may be a Kinect motion-sensing camera or an Xtion motion-sensing camera.
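The camera-space positioning that such cameras' SDK functions perform can be illustrated with a textbook pinhole back-projection from a color pixel plus its depth value. The intrinsics below are hypothetical, and a real SDK (e.g., the Kinect coordinate mapper) additionally corrects for lens distortion and the offset between the color and depth sensors:

```python
import numpy as np

def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into camera space.
    Plain pinhole model: no distortion, color and depth assumed aligned."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics roughly in the range of a 1920x1080 color sensor.
fx = fy = 1050.0
cx, cy = 960.0, 540.0
p = pixel_to_camera(960.0, 540.0, 2.0, fx, fy, cx, cy)   # principal point: on the optical axis
q = pixel_to_camera(1065.0, 540.0, 2.0, fx, fy, cx, cy)  # 105 px right of center at 2 m
```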
Step 102: determining the position of the positioning marker in the positioning image.
The positioning marker is arranged on the target positioning user, and the target positioning user appears in the positioning image. Thus, the position of the positioning marker in the positioning image can be determined first, and naked-eye stereoscopic display for the target positioning user can then be performed according to the position of the marker in the image.
The position of the locator marker in the positioning image can be determined in the following two ways, among others; the choice is not limited to these two.
Mode one: a machine learning method, i.e., the position of the locator marker in the positioning image is determined by calling a predetermined machine learning model. That is, optionally, step 102 includes: determining the position of the locator marker in the positioning image according to a predetermined machine learning model.
Here, a large number of image samples in which the locator marker appears at different positions are obtained in advance, and the machine learning model is trained on these samples. After the positioning image is obtained, the position of the locator marker in it is determined from the positioning image and the predetermined machine learning model. For example, the input of the model may be an image containing the locator marker, and its output the position of the locator marker in that image; feeding the positioning image obtained in step 101 into the model then yields the position of the locator marker in the positioning image.
Mode two: an image processing method, i.e., the position of the locator marker in the positioning image is determined by image processing applied to the positioning image.
Optionally, in one embodiment of the invention, the locator marker may be designed to contain multiple regions, each region filled with one color, with adjacent regions filled with different colors and the boundary lines between adjacent regions intersecting at one point. Optionally, the regions may further be arranged around that boundary intersection, i.e., distributed on both sides of it. For example, as shown in Fig. 2, the locator marker contains four regions 201, each region 201 filled with one color, adjacent regions having different colors; in left-to-right, top-to-bottom order the regions 201 are red, blue, blue, red, the boundary lines between adjacent regions intersect at a point O, and the regions surround the intersection, two regions above the intersection O and two below it.
Based on a locator marker designed as above, step 102 may obtain the coordinates, on the positioning image, of the intersection of the boundary lines of the locator marker, and take those coordinates as the position of the locator marker in the positioning image. That is, this special design of the locator marker is exploited to identify the boundary intersection, and the intersection's position in the positioning image is taken as the position of the locator marker.
Further, the step of obtaining the coordinates of the boundary intersection of the locator marker on the positioning image may include: first, obtaining, from the positioning image, one region-highlight image for each of the colors filling the marker's regions. A region-highlight image is an image in which the areas of the positioning image whose color matches one of the marker's colors are highlighted while all other areas are suppressed. In other words, the positioning image is processed to find, for each color of the marker, the matching areas in the image; since the marker's regions are filled with several colors, several region-highlight images are obtained. For instance, if the locator marker contains red and blue regions, this step processes the positioning image to obtain one image highlighting its red areas and one image highlighting its blue areas.
Next, each region-highlight image is filtered to obtain a corresponding filtered image in which the intersection is highlighted. Because the boundary lines between adjacent marker regions intersect at one point, filtering each region-highlight image yields one intersection-highlighting filtered image per region-highlight image. Note that this filtering must take the spatial layout of the marker's color regions into account: a filter operator is designed according to that layout, and applying it enhances the response at the intersection, thereby highlighting it.
Then, the values of the pixels at corresponding positions of the obtained filtered images are added (or multiplied) to obtain a target filtered image in which the intersection is highlighted; the coordinates, on the target filtered image, of the pixel with the maximum value are obtained and taken as the coordinates, on the positioning image, of the boundary intersection of the locator marker.
As explained above, because adjacent regions of the locator marker are filled with different colors, image processing applied to the positioning image can produce, for each fill color, a region-highlight image in which only the areas of that color are kept and all other areas of the positioning image are suppressed; with several fill colors, several region-highlight images result. Because the boundary lines between adjacent regions intersect at one point, filtering each region-highlight image highlights the intersection on each of them. Moreover, the intersection lies at the same position in each of these filtered images, so in the target filtered image obtained by adding or multiplying the pixel values at corresponding positions, the pixel with the maximum value is precisely the boundary intersection of the locator marker; the coordinates of that pixel on the target filtered image therefore give the coordinates of the boundary intersection of the locator marker on the positioning image.
It will be appreciated that the above way of locating the marker on the positioning image relies on the different colors of its regions. To determine the marker's position more accurately, i.e., to better distinguish on the positioning image the areas matching the marker's colors, the marker should be designed so that the colors of adjacent regions differ markedly: the difference between the chroma values of the colors of adjacent regions should exceed a threshold. For example, the fill colors of adjacent regions may be red and blue; more specifically, adjacent regions may use complementary colors, such as yellow and blue, or red and cyan.
This is further described below with a specific example. Assume the locator marker consists of four regions centrally symmetric about the boundary intersection, adjacent regions filled red and blue respectively. The number of regions is of course not limited to four; there may be at least four regions, centrally symmetric about the boundary intersection. For ease of description, four regions are used here: in left-to-right, top-to-bottom order they are filled red, blue, blue, red. Such a locator marker may, for example, look like Fig. 2.
First, a region-highlight image is obtained for each of the colors, i.e., a red-region highlight image and a blue-region highlight image of the positioning image. Specifically, the positioning image can be converted from the RGB color space to the YUV color space, giving a first image from the U channel and a second image from the V channel; the first image (U channel) serves as the blue-region highlight image and the second image (V channel) as the red-region highlight image. To make the blue and red areas stand out further, threshold segmentation can additionally be applied: in the first image, pixels whose value is below a first preset threshold are set to a first preset value while pixels at or above the first preset threshold are left unchanged, and the result of this threshold segmentation is used as the blue-region highlight image; likewise, in the second image, pixels whose value is below a second preset threshold are set to a second preset value while pixels at or above the second preset threshold are left unchanged, and the result is used as the red-region highlight image.
Specifically, the positioning image can be converted from the RGB color space to the YUV color space according to a standard RGB-to-YUV conversion formula.
As described above, after the positioning image is converted from the RGB color space to the YUV color space, the first image (U channel) can be used directly as the blue-region highlight image and the second image (V channel) as the red-region highlight image; alternatively, the image obtained by threshold-segmenting the first image (U channel) can be used as the blue-region highlight image and the image obtained by threshold-segmenting the second image (V channel) as the red-region highlight image. The purpose of thresholding the U-channel image is to suppress the darker areas of the blue component so that the resulting image is visually clearer; thresholding the V-channel image likewise suppresses the darker areas of the red component so that the resulting image is visually clearer.
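As a concrete illustration of the U/V-channel extraction and threshold segmentation just described, the sketch below uses the common BT.601 offset-128 conversion. The patent's own conversion formula is not reproduced in this text, so the coefficients, the threshold values, and the `region_highlight_images` helper are assumptions for illustration.

```python
import numpy as np

def region_highlight_images(rgb, u_thresh=140, v_thresh=140):
    """RGB -> YUV (BT.601, chroma offset by 128), then threshold the
    U channel (large for blue) and V channel (large for red) to obtain
    the blue-region and red-region highlight images."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0   # blue component
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0    # red component
    # Threshold segmentation: suppress darker chroma areas, keep the rest.
    blue_highlight = np.where(u < u_thresh, 0.0, u)
    red_highlight = np.where(v < v_thresh, 0.0, v)
    return blue_highlight, red_highlight
```

The preset value is taken as 0 here, which matches the stated goal of suppressing everything except the marker's color areas.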
Next, each region-highlight image is filtered to obtain a corresponding filtered image in which the intersection is highlighted. Specifically, the blue-region highlight image can be filtered with a first filter operator, giving a first filtered image, corresponding to the blue-region highlight image, in which the intersection is highlighted; and the red-region highlight image can be filtered with a second filter operator, giving a second filtered image, corresponding to the red-region highlight image, in which the intersection is highlighted.
Filtering the blue-region highlight image with the first filter operator means convolving the matrix formed by the pixel values of the blue-region highlight image with the first filter operator; likewise, filtering the red-region highlight image with the second filter operator means convolving the matrix formed by the pixel values of the red-region highlight image with the second filter operator.
Those skilled in the art will appreciate that the choice of filter operator is tied to the spatial distribution of the red and blue regions, so that the filtering achieves the purpose of enhancing the intersection. For the marker in this example, shown in Fig. 2, the regions are red, blue, blue, red in left-to-right, top-to-bottom order, and the operators above apply; if the red and blue regions of the marker were swapped, the two filter operators would be swapped accordingly. That is, when the filtering is meant to highlight the intersection in the red-region highlight image, the elements of value 1 in the filter operator correspond in position to the distribution of the red regions of the marker, and the elements of value -1 to the distribution of its blue regions; conversely, when highlighting the intersection in the blue-region highlight image, the elements of value 1 correspond to the blue regions and the elements of value -1 to the red regions. If the number of regions of the locator marker or their colors change, the filter operators must be adapted accordingly.
Referring to Fig. 5, Fig. 5 is a test image containing a locator marker in its lower-right corner, delineated with a circle in the figure. The marker is the one of the example above (i.e., Fig. 2): four square regions that are red, blue, blue, red in left-to-right, top-to-bottom order. To determine the marker's position in the image, the red-region and blue-region highlight images of the test image are obtained first. The red-region highlight image is shown in Fig. 6(a): the grey-white part is the red area of the test image, and the black part is everything outside the red area. The blue-region highlight image is shown in Fig. 7(a): the grey-white part is the blue area of the test image, and the black part is everything outside the blue area. The two highlight images are then filtered to obtain the intersection-highlighting filtered images: Fig. 6(b) from the red-region highlight image and Fig. 7(b) from the blue-region highlight image. As the images in Figs. 6(b) and 7(b) show, only the intersection of the locator marker is highlighted; the rest is essentially black. Finally, the pixel values of Fig. 6(b) and Fig. 7(b) are added (they could also be multiplied) to obtain the target filtered image, shown in Fig. 8, in which the boundary intersection of the locator marker appears bright white. From the target filtered image, the pixel coordinates of the intersection can be obtained.
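The whole detection chain (region-highlight masks → layout-matched filter operators → combined response → arg-max) can be sketched end to end on a synthetic marker. Since the patent's operator matrices are not reproduced in this text, the ±1 quadrant kernels below are an assumed realization of the stated rule "+1 where that color lies in the marker, -1 where the other color lies"; NumPy-only correlation is used to avoid extra dependencies.

```python
import numpy as np

def quadrant_kernel(signs, k=8):
    """2k x 2k kernel whose four k x k quadrants carry the given signs."""
    (a, b), (c, d) = signs
    return np.block([[a * np.ones((k, k)), b * np.ones((k, k))],
                     [c * np.ones((k, k)), d * np.ones((k, k))]])

def correlate2d(img, ker):
    """Plain 'valid' cross-correlation (NumPy only, no scipy)."""
    kh, kw = ker.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

# Synthetic 32x32 marker masks: red in top-left/bottom-right, blue elsewhere.
size, half = 32, 16
red_mask = np.zeros((size, size))
blue_mask = np.zeros((size, size))
red_mask[:half, :half] = red_mask[half:, half:] = 1.0
blue_mask[:half, half:] = blue_mask[half:, :half] = 1.0

k_red = quadrant_kernel(((+1, -1), (-1, +1)), k=8)   # matches red layout
k_blue = quadrant_kernel(((-1, +1), (+1, -1)), k=8)  # matches blue layout

# Combine the two filtered images (multiplication; addition also works),
# then take the arg-max as the boundary intersection.
resp = correlate2d(red_mask, k_red) * correlate2d(blue_mask, k_blue)
iy, ix = np.unravel_index(np.argmax(resp), resp.shape)
intersection = (iy + 8, ix + 8)   # centre of the 16x16 kernel footprint
```

On this synthetic marker the response peaks exactly where the kernel straddles the boundary intersection, which is the effect the patent's filtering step is designed to produce.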
Step 103: determine the spatial position of the eyes of the target positioning user according to the position of the locator marker in the positioning image.
After step 102 above has determined the position of the locator marker in the positioning image, the spatial position of the eyes of the target positioning user can be determined further according to that position.
The spatial position of the eyes of the target positioning user can be determined from the position of the locator marker in the positioning image in the following two ways, among others; the choice is not limited to these two.
Mode one:
Determine the position of the eyes of the target positioning user in the positioning image from the position of the locator marker in the positioning image and a predetermined correspondence between where the marker is attached to the target positioning user and the position of that user's eyes; then determine the spatial position of the eyes of the target positioning user from their position in the positioning image.
In this mode, the attachment position of the marker on the target positioning user and its positional correspondence to the user's eyes are known in advance, for example the direction and distance from the marker to the eyes of the target positioning user. In the positioning image, the position of the eyes can then be determined from the marker's position in the positioning image and that direction and distance, and the eyes' position in the positioning image can finally be converted to a spatial position.
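Mode one reduces to a fixed shift in image coordinates followed by an image-to-space conversion. The sketch below makes that explicit; the offset, the toy pixel-to-space mapping, and the `eyes_from_marker` name are illustrative assumptions.

```python
def eyes_from_marker(marker_px, eye_offset_px, pixel_to_space):
    """Mode-one sketch: the pixel offset from the locator marker to the
    user's eyes is known in advance, so the eyes' image position is a
    simple shift; `pixel_to_space` stands in for whatever image-to-space
    conversion the system uses."""
    eye_px = (marker_px[0] + eye_offset_px[0], marker_px[1] + eye_offset_px[1])
    return eye_px, pixel_to_space(eye_px)

# Toy mapping for illustration: 2 mm per pixel at a fixed 600 mm depth.
eye_px, eye_xyz = eyes_from_marker(
    marker_px=(200, 120), eye_offset_px=(0, -40),
    pixel_to_space=lambda p: (p[0] * 2.0, p[1] * 2.0, 600.0))
```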
Mode two:
When the predetermined image acquisition device is a 3D somatosensory camera, for example a Kinect somatosensory camera, the functions provided by the 3D somatosensory camera can be used to determine the spatial position of the eyes of the target positioning user.
Specifically, the positioning image is the color image captured by the 3D somatosensory camera. After the position of the locator marker in the positioning image, i.e., the color image, has been determined, the first application programming interface (API) function provided by the 3D somatosensory camera can be called: the marker's position in the color image is substituted into the first API function, which returns the target position, in the person index map provided by the 3D somatosensory camera, that corresponds to the marker's position in the color image.
From the person index map, the target-positioning-user index number corresponding to the target position is extracted; according to that index number, the person skeleton data provided by the 3D somatosensory camera for that index number is obtained; and from the person skeleton data corresponding to the target-positioning-user index number, the spatial position of the eyes of the target positioning user is determined.
In short, this mode uses the first API function, the person index map, and the person skeleton data provided by the 3D somatosensory camera to determine the spatial position of the eyes of the target positioning user from the position of the locator marker in the positioning image (i.e., the color image).
When the 3D somatosensory camera shoots a scene image containing several persons, it produces a color image, a person index map, and skeleton data for each person in the scene. The 3D somatosensory camera also provides the first API function, which maps a position in the color image to the corresponding position in the person index map; thus, after the marker's position in the color image has been substituted into the first API function, the target position in the person index map corresponding to the marker's position in the color image is obtained.
In the person index map, each person's area carries that person's index number, and the index number also corresponds to the person skeleton data provided by the 3D somatosensory camera. Therefore, the target-positioning-user index number corresponding to the above target position can be obtained from the person index map, and the skeleton data of the target positioning user can then be looked up by that index number.
The person skeleton data contains the spatial positions of a preset number of joints of the person. For example, in the Kinect somatosensory camera, a skeleton is represented by 20 joints; when a person enters the Kinect's field of view, the Kinect finds the positions of the person's 20 joints and expresses each specific spatial position as an (x, y, z) coordinate. Thus, once the skeleton data of the target positioning user has been obtained, the spatial position of the eyes of the target positioning user can be determined from it.
For example, the above step of determining the spatial position of the eyes of the target positioning user from the person skeleton data corresponding to the target-positioning-user index number may include:
determining the head spatial position of the target positioning user from the person skeleton data corresponding to the target-positioning-user index number;
calling the second API function provided by the 3D somatosensory camera, substituting the head spatial position into the second API function, and obtaining the returned position of the head of the target positioning user in the color image;
determining the position of the eyes of the target positioning user in the color image from the position of the head in the color image; and
determining the spatial position of the eyes of the target positioning user from the position of the eyes in the color image.
From the above, the head spatial position is first determined from the person skeleton data and then mapped into the color image; the human eyes are then identified in the color image from the head's position there, giving the position of the eyes in the color image; finally, the position of the eyes in the color image is mapped back into three-dimensional space, yielding the spatial position of the eyes of the target positioning user.
The person skeleton data includes the person's head spatial position, so the head spatial position of the target positioning user can be extracted from that user's skeleton data.
The 3D somatosensory camera also provides the second API function, which maps a position in space to the corresponding position in the color image the camera captures; substituting the head spatial position of the target positioning user into the second API function therefore yields the position of the user's head in the color image.
Mapping the position of the eyes in the color image into three-dimensional space is a conversion between coordinates in a two-dimensional image and three-dimensional spatial coordinates. Any existing transformation between the two coordinate systems may be used for this conversion; the embodiments of the present invention place no particular restriction on it.
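One standard realization of this 2D-to-3D conversion is pinhole back-projection using the camera intrinsics together with the depth at the eye pixel. The patent leaves the choice open, so the function below is only one possible instance, and its intrinsic values are placeholders that would come from calibration.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) plus depth -> camera-space
    point. fx, fy are focal lengths in pixels; (cx, cy) is the principal
    point. All intrinsics here are illustrative, not from the patent."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with placeholder intrinsics: a pixel 100 px right of the
# principal point at 1 m depth maps to x = 100 * 1000 / 500 = 200 mm.
eye_xyz = backproject(420, 240, 1000.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```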
Further, the above step of determining the position of the eyes of the target positioning user in the color image from the position of the head in the color image may include:
detecting, with a face alignment algorithm and starting from the position of the head in the color image, the positions of the facial feature points of the target positioning user in the color image; and
determining the position of the eyes of the target positioning user in the color image from the positions of the detected facial feature points.
The face alignment algorithm builds a cascade of residual regression trees that regresses the face shape step by step from the current shape toward the true shape. Each leaf node of each residual regression tree stores a residual regression amount; whenever an input falls on a node, that residual is added to the running estimate, achieving the regression, and the superposition of all residuals completes the face alignment. Once face alignment is complete, landmark feature positions such as the eyes, nose, mouth, and face contour can be found automatically; face alignment algorithms are available in the prior art and are not described further here.
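Once face alignment has produced the feature points, the eye positions can be taken as the mean of each eye's contour points. The 68-point indexing used below (points 36-41 for one eye, 42-47 for the other) follows a common landmark annotation scheme and is an assumption for illustration; the patent does not fix a particular landmark layout.

```python
import numpy as np

def eye_centers(landmarks68):
    """Return the two eye centres as the mean of each eye's six contour
    points, given a (68, 2) array of face-alignment feature points."""
    pts = np.asarray(landmarks68, dtype=np.float64)
    left = pts[36:42].mean(axis=0)    # viewer-left eye contour points
    right = pts[42:48].mean(axis=0)   # viewer-right eye contour points
    return left, right
```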
Step 104: carry out naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content the target positioning user watches is adapted to the spatial position of the eyes.
Once the spatial position of the eyes of the target positioning user, i.e., the user's viewing position in space, has been determined, stereoscopic display is carried out according to the spatial position of the eyes, so that the display content matches the user and a correct stereoscopic display effect is guaranteed, effectively avoiding problems such as reversed views, crosstalk, and ghosting. Note that a naked-eye stereoscopic display generally comprises a display panel and a light-splitting device arranged opposite the display panel. The light-splitting device may, for example, be a grating, such as a slit grating or a lenticular lens sheet, or any other light-splitting element usable by naked-eye stereoscopic displays in the prior art; the present invention places no restriction on it. During naked-eye stereoscopic display, the left-eye picture and the right-eye picture are arranged on the display panel (i.e., the pixel arrangement, or "arrangement map"), and, in cooperation with the light-splitting action of the light-splitting device, the left-eye picture is sent to the left eye of the target positioning user and the right-eye picture to the right eye, so that the target positioning user perceives a stereoscopic image.
So that the display content the target positioning user watches is adapted to that user's viewing position, the stereoscopic image is displayed, i.e., the left-eye and right-eye images are arranged, on the basis of the acquired spatial position of the eyes. Specifically, arrangement parameters, such as the arrangement period, can be determined from that spatial position, and procedures such as the arrangement of the left and right stereoscopic images are then carried out with those parameters to perform the stereoscopic display. When the target positioning user moves, i.e., when the viewing position changes, the display is adjusted adaptively according to the tracked spatial position of the eyes, achieving display that tracks the viewing position of the target positioning user.
The specific arrangement procedure can be found in the prior art, and any known manner of determining the tracking arrangement parameters from the determined spatial position of the eyes may be used. For example, a functional relation between spatial position and tracking arrangement parameters can be preset; once the spatial position of the eyes has been determined, it is substituted into the functional relation to determine the arrangement parameters. Of course, the method of performing stereoscopic display from the eye spatial position information of the target positioning user is not limited and can be chosen freely by those skilled in the art; it is not described further here.
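A minimal version of the arrangement step can be sketched as alternating display columns between the two views, with a phase derived from the tracked eye position so that each eye keeps receiving its own view as the viewer moves. A real arrangement map depends on the grating pitch and sub-pixel layout, so this is an illustrative simplification, not the patent's actual algorithm.

```python
import numpy as np

def interleave_columns(left, right, phase):
    """Column-interleave two equally sized view images. `phase` (0 or 1),
    derived from the tracked eye position, swaps which set of columns
    carries which view."""
    out = np.empty_like(left)
    out[:, phase::2] = left[:, phase::2]           # columns for the left-eye view
    out[:, 1 - phase::2] = right[:, 1 - phase::2]  # columns for the right-eye view
    return out
```

Flipping `phase` from 0 to 1 when the viewer crosses a view boundary is the simplest form of the "adjust the display according to the tracked eye position" behavior described above.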
In conclusion the Nakedness-yet stereoscopic display method of the embodiment of the present invention, specific embodiment are exemplified below:
In present embodiment, the image including at least one user is shot using Kinect somatosensory video camera, wherein at least
Include object localization user in one user, locator markers is provided with object localization user, as shown in Fig. 2, the positioning
It include four regions 201 on marker, a kind of color is filled in each region 201, and the color filled in arbitrary neighborhood region
Difference, for example, according to from left to right, sequence from top to bottom, region 201 is followed successively by red, blue, blue, red, also,
Line of demarcation between arbitrary neighborhood region intersects at a point O, and intersection point of the multiple regions around line of demarcation is arranged, and two regions are located at
The top of intersection point O, two regions are located at the lower section of intersection point O.
Specifically, the naked-eye stereoscopic display method in this embodiment comprises the following steps 501 to 517:
Step 501: obtain the color image captured by the Kinect somatosensory camera, the color image containing at least one user, the at least one user including the target positioning user, who carries a locator marker.
Step 502: convert the color image from the RGB color space to the YUV color space, obtaining a first image from the U channel and a second image from the V channel.
Step 503: apply threshold segmentation to the first image (U channel): set pixels of the first image whose value is below the first preset threshold to the first preset value, leave pixels at or above the first preset threshold unchanged, and use the resulting image as the blue-region highlight image.
Step 504: apply threshold segmentation to the second image (V channel): set pixels of the second image whose value is below the second preset threshold to the second preset value, leave pixels at or above the second preset threshold unchanged, and use the resulting image as the red-region highlight image.
Step 505: filter the blue-region highlight image with the first filter operator, obtaining the first filtered image, corresponding to the blue-region highlight image, in which the intersection is highlighted.
Step 506: filter the red-region highlight image with the second filter operator, obtaining the second filtered image, corresponding to the red-region highlight image, in which the intersection is highlighted.
Step 507: add (or multiply) the values of the pixels at corresponding positions of the first filtered image and the second filtered image, obtaining the target filtered image in which the intersection is highlighted.
Step 508: obtain the coordinates, on the target filtered image, of the pixel with the maximum value, and take them as the coordinates, on the color image, of the boundary intersection of the locator marker.
Step 509: take the intersection's coordinates on the color image as the position of the locator marker in the color image.
Step 510: call the first API function provided by the Kinect somatosensory camera, substituting the marker's position in the color image into the first API function, and obtain the returned target position, in the person index map provided by the Kinect somatosensory camera, that corresponds to the marker's position in the color image.
Step 511: from personage's index map, extracting object localization user call number corresponding with target position.
Step 512: according to object localization user call number, obtaining Kinect somatosensory video camera provides, target positioning and use
The corresponding personage's skeleton data of family call number.
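Steps 510 to 512 map the marker's colour-image position into the sensor's person index map and read off the user index at that position. The mapping and the skeleton query are sensor-SDK calls, so the sketch below assumes the coordinate mapping has already been applied and shows only the lookup; the array contents and coordinates are illustrative:

```python
import numpy as np

def user_index_at(person_index_map, mapped_position):
    """Step 511: the person index map labels each pixel with the index of
    the tracked user covering it. Steps 510 and 512 (the colour-to-index-map
    coordinate mapping and the skeleton-data query) are left to the SDK."""
    x, y = mapped_position
    return int(person_index_map[y, x])

# Illustrative 4x4 index map: user 2 occupies the right half of the frame.
index_map = np.array([[0, 0, 2, 2]] * 4, dtype=np.uint8)
target_index = user_index_at(index_map, mapped_position=(3, 1))
```

This lookup is what lets a single marker single out one user among several: only the skeleton whose index matches the pixel under the marker is used for positioning.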
Step 513: Determine the head spatial position of the target positioning user according to the person skeleton data corresponding to the target positioning user index number.
Step 514: Call a second API function provided by the Kinect camera, substituting the head spatial position into the second API function, to obtain the position, returned by the second API function, of the head of the target positioning user in the color image.
Step 515: Detect, according to the position of the head of the target positioning user in the color image, the positions of the facial feature points of the target positioning user in the color image using a face alignment algorithm.
Step 516: Determine the position of the eyes of the target positioning user in the color image according to the positions of the detected facial feature points.
Step 517: Determine the spatial position of the eyes of the target positioning user according to the position of the eyes in the color image.
As can be seen from the above, in the embodiments of the present invention a positioning marker is provided on the target positioning user, and a positioning image of the scene containing the target positioning user is acquired by a predetermined image acquisition device. The position of the positioning marker in the positioning image is determined first, and the spatial position of the eyes of the target positioning user is then determined from that position. Thus, when spatially positioning the target positioning user, the embodiments of the present invention do not require the user to wear a headband with an infrared tracker: with only a single positioning marker, the target positioning user who requires spatial positioning can be identified among multiple users and then spatially positioned according to the marker. This overcomes the inconvenience, in the prior art, of tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
An embodiment of the present invention further provides a naked-eye stereoscopic display apparatus. As shown in Fig. 3, the apparatus includes:
an image acquisition module 301, configured to obtain a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is provided on the target positioning user;
a first positioning module 302, configured to determine the position of the positioning marker in the positioning image;
a second positioning module 303, configured to determine the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
a display module 304, configured to perform naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content watched by the target positioning user is adapted to the spatial position of the eyes.
Preferably, the predetermined image acquisition device is a three-dimensional motion-sensing camera, and the positioning image is a color image acquired by the three-dimensional motion-sensing camera. The second positioning module 303 then includes:
a function calling unit, configured to call a first application programming interface (API) function provided by the three-dimensional motion-sensing camera, substitute the position of the positioning marker in the color image into the first API function, and obtain the target position, returned by the first API function, in the person index map provided by the three-dimensional motion-sensing camera that corresponds to the position of the positioning marker in the color image;
an index number extraction unit, configured to extract, from the person index map, the target positioning user index number corresponding to the target position;
a skeleton data determination unit, configured to obtain, according to the target positioning user index number, the person skeleton data provided by the three-dimensional motion-sensing camera that corresponds to the target positioning user index number;
a spatial positioning unit, configured to determine the spatial position of the eyes of the target positioning user according to the person skeleton data corresponding to the target positioning user index number.
Preferably, the spatial positioning unit includes:
a head positioning subunit, configured to determine the head spatial position of the target positioning user according to the person skeleton data corresponding to the target positioning user index number;
a function calling subunit, configured to call, according to the head spatial position of the target positioning user, a second API function provided by the three-dimensional motion-sensing camera, substitute the head spatial position into the second API function, and obtain the position, returned by the second API function, of the head of the target positioning user in the color image;
a first eye positioning subunit, configured to determine the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
a second eye positioning subunit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the color image.
Preferably, the first eye positioning subunit is specifically configured to: detect, according to the position of the head of the target positioning user in the color image, the positions of the facial feature points of the target positioning user in the color image using a face alignment algorithm; and determine the position of the eyes of the target positioning user in the color image according to the positions of the detected facial feature points.
Preferably, the three-dimensional motion-sensing camera comprises a Kinect motion-sensing camera or an Xtion motion-sensing camera.
Preferably, the second positioning module 303 includes:
a first determination unit, configured to determine the position of the eyes of the target positioning user in the positioning image according to the position of the positioning marker in the positioning image and a predetermined positional correspondence between the placement of the positioning marker on the target positioning user and the eyes of the target positioning user;
a second determination unit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the positioning image.
Preferably, the first positioning module 302 includes:
a first positioning unit, configured to determine the position of the positioning marker in the positioning image according to a predetermined machine learning model.
Preferably, the positioning marker contains multiple regions, each region is filled with one color, the colors filled in any two adjacent regions are different, and the boundary lines between adjacent regions intersect at one point. The first positioning module 302 then includes:
an intersection coordinate acquiring unit, configured to obtain the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker;
a second positioning unit, configured to determine the coordinates of the intersection point on the positioning image as the position of the positioning marker in the positioning image.
Preferably, the intersection coordinate acquiring unit includes:
a first image processing subunit, configured to obtain, based on the positioning image, a region highlight image for each of the colors of the multiple regions;
a second image processing subunit, configured to filter each region highlight image separately to obtain, for each region highlight image, a corresponding filtered image in which the intersection point is highlighted;
a third image processing subunit, configured to add or multiply the values of the pixels at corresponding positions of the obtained filtered images to obtain a target filtered image in which the intersection point is highlighted;
an intersection coordinate determination subunit, configured to obtain the coordinates, on the target filtered image, of the pixel with the largest value, and determine these as the coordinates of the intersection point of the boundary lines in the positioning marker on the positioning image.
Preferably, the positioning marker includes at least four regions that are centrosymmetric about the intersection point of the boundary lines, and any two adjacent regions are filled with red and blue, respectively. The first image processing subunit is then specifically configured to:
transform the positioning image from the RGB color space into the YUV color space to obtain a first image of the U channel and a second image of the V channel, and use the first image of the U channel as the blue-region highlight image and the second image of the V channel as the red-region highlight image;
or
transform the positioning image from the RGB color space into the YUV color space to obtain a first image of the U channel and a second image of the V channel;
perform threshold segmentation on the first image of the U channel, setting the value of each pixel in the first image whose value is less than a first preset threshold to a first preset value and leaving the value of each pixel whose value is greater than or equal to the first preset threshold unchanged, and use the resulting image as the blue-region highlight image;
perform threshold segmentation on the second image of the V channel, setting the value of each pixel in the second image whose value is less than a second preset threshold to a second preset value and leaving the value of each pixel whose value is greater than or equal to the second preset threshold unchanged, and use the resulting image as the red-region highlight image.
As can be seen from the above, in the embodiments of the present invention a positioning marker is provided on the target positioning user, and a positioning image of the scene containing the target positioning user is acquired by a predetermined image acquisition device. The position of the positioning marker in the positioning image is determined first, and the spatial position of the eyes of the target positioning user is then determined from that position. Thus, when performing naked-eye stereoscopic display for the target positioning user, the embodiments of the present invention do not require the user to wear a headband with an infrared tracker: with only a single positioning marker, the target positioning user who requires naked-eye stereoscopic display can be identified among multiple users and the display performed for that user according to the marker. This overcomes the inconvenience, in the prior art, of tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
An embodiment of the present invention further provides naked-eye stereoscopic display equipment. As shown in Fig. 4, the equipment includes a processor 401 and a memory 402. The memory 402 is configured to store an executable computer program, and the processor 401 calls the computer program in the memory 402 to execute the following steps:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is provided on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content watched by the target positioning user is adapted to the spatial position of the eyes.
Therefore, the naked-eye stereoscopic display equipment of the embodiments of the present invention obtains a positioning image, acquired by the predetermined image acquisition device, of the scene containing the target positioning user on whom the positioning marker is provided, determines the position of the positioning marker in the positioning image, and then determines the spatial position of the eyes of the target positioning user from that position. Thus, when performing naked-eye stereoscopic display for the target positioning user, no headband with an infrared tracker need be worn: with only a single positioning marker, the target positioning user who requires spatial positioning can be identified among multiple users and then spatially positioned according to the marker, overcoming the inconvenience, in the prior art, of tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
An embodiment of the present invention further provides a computer-readable storage medium including a computer program, the computer program being executable by a processor to perform the following steps:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is provided on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content watched by the target positioning user is adapted to the spatial position of the eyes.
With the above computer-readable storage medium of the present invention, the stored program can obtain a positioning image, acquired by the predetermined image acquisition device, of the scene containing the target positioning user on whom the positioning marker is provided, determine the position of the positioning marker in the positioning image, and then determine the spatial position of the eyes of the target positioning user from that position. Thus, when performing naked-eye stereoscopic display for the target positioning user, no headband with an infrared tracker need be worn: with only a single positioning marker, the target positioning user who requires spatial positioning can be identified among multiple users and then spatially positioned according to the marker, overcoming the inconvenience, in the prior art, of tracking and positioning by detecting a headband with an infrared tracker worn on the user's head.
The above are preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may further make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements also fall within the protection scope of the present invention.
Claims (22)
1. A naked-eye stereoscopic display method, characterized by comprising:
obtaining a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is provided on the target positioning user;
determining the position of the positioning marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content watched by the target positioning user is adapted to the spatial position of the eyes.
2. The method according to claim 1, characterized in that:
the predetermined image acquisition device is a three-dimensional motion-sensing camera;
the positioning image is a color image acquired by the three-dimensional motion-sensing camera;
the step of determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image comprises:
calling a first application programming interface (API) function provided by the three-dimensional motion-sensing camera, substituting the position of the positioning marker in the color image into the first API function, and obtaining the target position, returned by the first API function, in the person index map provided by the three-dimensional motion-sensing camera that corresponds to the position of the positioning marker in the color image;
extracting, from the person index map, a target positioning user index number corresponding to the target position;
obtaining, according to the target positioning user index number, the person skeleton data provided by the three-dimensional motion-sensing camera that corresponds to the target positioning user index number;
determining the spatial position of the eyes of the target positioning user according to the person skeleton data corresponding to the target positioning user index number.
3. The method according to claim 2, characterized in that the step of determining the spatial position of the eyes of the target positioning user according to the person skeleton data corresponding to the target positioning user index number comprises:
determining the head spatial position of the target positioning user according to the person skeleton data corresponding to the target positioning user index number;
calling, according to the head spatial position of the target positioning user, a second API function provided by the three-dimensional motion-sensing camera, substituting the head spatial position into the second API function, and obtaining the position, returned by the second API function, of the head of the target positioning user in the color image;
determining the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
determining the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the color image.
4. The method according to claim 3, characterized in that the step of determining the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image comprises:
detecting, according to the position of the head of the target positioning user in the color image, the positions of the facial feature points of the target positioning user in the color image using a face alignment algorithm;
determining the position of the eyes of the target positioning user in the color image according to the positions of the detected facial feature points.
5. The method according to claim 2, characterized in that the three-dimensional motion-sensing camera comprises a Kinect motion-sensing camera or an Xtion motion-sensing camera.
6. The method according to claim 1, characterized in that the step of determining the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image comprises:
determining the position of the eyes of the target positioning user in the positioning image according to the position of the positioning marker in the positioning image and a predetermined positional correspondence between the placement of the positioning marker on the target positioning user and the eyes of the target positioning user;
determining the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the positioning image.
7. The method according to any one of claims 1 to 6, characterized in that the step of determining the position of the positioning marker in the positioning image comprises:
determining the position of the positioning marker in the positioning image according to a predetermined machine learning model.
8. The method according to any one of claims 1 to 6, characterized in that the positioning marker contains multiple regions, each region is filled with one color, the colors filled in any two adjacent regions are different, and the boundary lines between adjacent regions intersect at one point; and the step of determining the position of the positioning marker in the positioning image comprises:
obtaining the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker;
determining the coordinates of the intersection point on the positioning image as the position of the positioning marker in the positioning image.
9. The method according to claim 8, characterized in that the step of obtaining the coordinates of the intersection point of the boundary lines in the positioning marker on the positioning image comprises:
obtaining, based on the positioning image, a region highlight image for each of the colors of the multiple regions;
filtering each region highlight image separately to obtain, for each region highlight image, a corresponding filtered image in which the intersection point is highlighted;
adding or multiplying the values of the pixels at corresponding positions of the obtained filtered images to obtain a target filtered image in which the intersection point is highlighted;
obtaining the coordinates, on the target filtered image, of the pixel with the largest value, and determining these as the coordinates of the intersection point of the boundary lines in the positioning marker on the positioning image.
10. The method according to claim 9, characterized in that the positioning marker includes at least four regions that are centrosymmetric about the intersection point of the boundary lines, any two adjacent regions being filled with red and blue, respectively; and the step of obtaining a region highlight image for each of the colors of the multiple regions comprises:
transforming the positioning image from the RGB color space into the YUV color space to obtain a first image of the U channel and a second image of the V channel, and using the first image of the U channel as the blue-region highlight image and the second image of the V channel as the red-region highlight image;
or
transforming the positioning image from the RGB color space into the YUV color space to obtain a first image of the U channel and a second image of the V channel;
performing threshold segmentation on the first image of the U channel, setting the value of each pixel in the first image whose value is less than a first preset threshold to a first preset value and leaving the value of each pixel whose value is greater than or equal to the first preset threshold unchanged, and using the resulting image as the blue-region highlight image;
performing threshold segmentation on the second image of the V channel, setting the value of each pixel in the second image whose value is less than a second preset threshold to a second preset value and leaving the value of each pixel whose value is greater than or equal to the second preset threshold unchanged, and using the resulting image as the red-region highlight image.
11. A naked-eye stereoscopic display apparatus, characterized by comprising:
an image acquisition module, configured to obtain a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a positioning marker is provided on the target positioning user;
a first positioning module, configured to determine the position of the positioning marker in the positioning image;
a second positioning module, configured to determine the spatial position of the eyes of the target positioning user according to the position of the positioning marker in the positioning image;
a display module, configured to perform naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content watched by the target positioning user is adapted to the spatial position of the eyes.
12. The apparatus according to claim 11, characterized in that:
the predetermined image acquisition device is a three-dimensional motion-sensing camera;
the positioning image is a color image acquired by the three-dimensional motion-sensing camera;
the second positioning module includes:
a function calling unit, configured to call a first application programming interface (API) function provided by the three-dimensional motion-sensing camera, substitute the position of the positioning marker in the color image into the first API function, and obtain the target position, returned by the first API function, in the person index map provided by the three-dimensional motion-sensing camera that corresponds to the position of the positioning marker in the color image;
an index number extraction unit, configured to extract, from the person index map, the target positioning user index number corresponding to the target position;
a skeleton data determination unit, configured to obtain, according to the target positioning user index number, the person skeleton data provided by the three-dimensional motion-sensing camera that corresponds to the target positioning user index number;
a spatial positioning unit, configured to determine the spatial position of the eyes of the target positioning user according to the person skeleton data corresponding to the target positioning user index number.
13. The apparatus according to claim 12, characterized in that:
the predetermined position is a human eye;
the spatial positioning unit includes:
a head positioning subunit, configured to determine the head spatial position of the target positioning user according to the person skeleton data corresponding to the target positioning user index number;
a function calling subunit, configured to call, according to the head spatial position of the target positioning user, a second API function provided by the three-dimensional motion-sensing camera, substitute the head spatial position into the second API function, and obtain the position, returned by the second API function, of the head of the target positioning user in the color image;
a first eye positioning subunit, configured to determine the position of the eyes of the target positioning user in the color image according to the position of the head of the target positioning user in the color image;
a second eye positioning subunit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the color image.
14. The apparatus according to claim 13, characterized in that the first eye positioning subunit is specifically configured to:
detect, according to the position of the head of the target positioning user in the color image, the positions of the facial feature points of the target positioning user in the color image using a face alignment algorithm;
determine the position of the eyes of the target positioning user in the color image according to the positions of the detected facial feature points.
15. The apparatus according to claim 12, characterized in that the three-dimensional motion-sensing camera comprises a Kinect motion-sensing camera or an Xtion motion-sensing camera.
16. The apparatus according to claim 11, characterized in that the second positioning module includes:
a first determination unit, configured to determine the position of the eyes of the target positioning user in the positioning image according to the position of the positioning marker in the positioning image and a predetermined positional correspondence between the placement of the positioning marker on the target positioning user and the eyes of the target positioning user;
a second determination unit, configured to determine the spatial position of the eyes of the target positioning user according to the position of the eyes of the target positioning user in the positioning image.
17. The apparatus according to any one of claims 11 to 16, characterized in that the first positioning module includes:
a first positioning unit, configured to determine the position of the positioning marker in the positioning image according to a predetermined machine learning model.
18. The apparatus according to any one of claims 11 to 16, characterized in that the positioning marker contains multiple regions, each region is filled with one color, the colors filled in any two adjacent regions are different, and the boundary lines between adjacent regions intersect at one point; and the first positioning module includes:
an intersection coordinate acquiring unit, configured to obtain the coordinates, on the positioning image, of the intersection point of the boundary lines in the positioning marker;
a second positioning unit, configured to determine the coordinates of the intersection point on the positioning image as the position of the positioning marker in the positioning image.
19. The device according to claim 18, wherein the intersection coordinate acquiring unit comprises:
a first image processing subunit, configured to obtain, based on the positioning image, a region-prominent image for each color among the colors of the plurality of regions;
a second image processing subunit, configured to filter the region-prominent image of each color, obtaining for each region-prominent image a corresponding filtered image in which the intersection point is highlighted;
a third image processing subunit, configured to add or multiply the values of the pixels at corresponding positions in the acquired filtered images, obtaining a target filtered image in which the intersection point is highlighted;
an intersection coordinate determining subunit, configured to acquire the coordinates, on the target filtered image, of the pixel with the maximum value, and determine those coordinates as the coordinates, in the positioning image, of the intersection point of the dividing lines of the locator marker.
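The pipeline of claims 18 and 19 can be sketched end to end in Python with numpy only. This is a toy reconstruction under stated assumptions: a synthetic four-quadrant red/blue marker, binary channel thresholds as the region-prominent images, and hand-built quadrant-sign kernels as the filters whose response peaks at the dividing-line intersection; the patent does not specify these particular kernels:

```python
import numpy as np

def make_marker(size=41):
    """Synthetic marker: four quadrants, adjacent ones red/blue, centrally
    symmetric about the centre pixel (the dividing-line intersection)."""
    img = np.zeros((size, size, 3), np.uint8)
    c = size // 2
    ys, xs = np.mgrid[0:size, 0:size]
    red = (ys < c) == (xs < c)          # top-left and bottom-right quadrants
    img[red] = (255, 0, 0)
    img[~red] = (0, 0, 255)             # top-right and bottom-left quadrants
    return img, (c, c)

def quadrant_kernel(k, positive_tl_br):
    """+1 in two opposite quadrants, -1 in the other two; correlating it
    with a region-prominent image peaks where the quadrants meet."""
    c = k // 2
    ys, xs = np.mgrid[0:k, 0:k]
    tl_br = (ys < c) == (xs < c)
    return np.where(tl_br == positive_tl_br, 1.0, -1.0)

def correlate(img, ker):
    """Naive 'valid' cross-correlation (no SciPy dependency)."""
    k = ker.shape[0]
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + k, x:x + k] * ker)
    return out

def locate_intersection(rgb, k=9):
    red_prom = (rgb[..., 0] > 128).astype(float)    # red-region-prominent
    blue_prom = (rgb[..., 2] > 128).astype(float)   # blue-region-prominent
    r_resp = correlate(red_prom, quadrant_kernel(k, True))
    b_resp = correlate(blue_prom, quadrant_kernel(k, False))
    target = r_resp * b_resp                        # third subunit: multiply
    y, x = np.unravel_index(np.argmax(target), target.shape)
    return int(y + k // 2), int(x + k // 2)         # undo 'valid' offset

marker, centre = make_marker()
print(locate_intersection(marker), centre)          # both (20, 20)
```

Multiplying the two filter responses (rather than adding) suppresses locations where only one color pattern matches, which is why the maximum lands on the intersection even in the presence of strong single-color structure elsewhere.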
20. The device according to claim 19, wherein:
the locator marker comprises at least four regions that are centrally symmetric about the intersection point of the dividing lines, and any two adjacent regions are filled with red and blue respectively;
the first image processing subunit is specifically configured to:
convert the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel, take the first image of the U channel as the blue-region-prominent image, and take the second image of the V channel as the red-region-prominent image;
or
convert the positioning image from the RGB color space to the YUV color space to obtain a first image of the U channel and a second image of the V channel;
perform threshold segmentation on the first image of the U channel, setting the values of the pixels in the first image whose values are less than a first preset threshold to a first preset value while leaving the values of the pixels whose values are greater than or equal to the first preset threshold unchanged, and take the image obtained after the threshold segmentation as the blue-region-prominent image;
perform threshold segmentation on the second image of the V channel, setting the values of the pixels in the second image whose values are less than a second preset threshold to a second preset value while leaving the values of the pixels whose values are greater than or equal to the second preset threshold unchanged, and take the image obtained after the threshold segmentation as the red-region-prominent image.
21. A naked-eye stereoscopic display device, comprising:
a processor and a memory;
the memory being configured to store an executable computer program;
the processor calling the computer program in the memory to perform the following steps:
acquiring a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a locator marker is provided on the target positioning user;
determining the position of the locator marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the locator marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content viewed by the target positioning user is adapted to the spatial position of the eyes.
22. A computer-readable storage medium, comprising a computer program, the computer program being executable by a processor to perform the following steps:
acquiring a positioning image provided by a predetermined image acquisition device, wherein the positioning image contains an image of at least one user, the at least one user includes a target positioning user, and a locator marker is provided on the target positioning user;
determining the position of the locator marker in the positioning image;
determining the spatial position of the eyes of the target positioning user according to the position of the locator marker in the positioning image;
performing naked-eye stereoscopic display according to the spatial position of the eyes of the target positioning user, so that the display content viewed by the target positioning user is adapted to the spatial position of the eyes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711420048.8A CN109961478A (en) | 2017-12-25 | 2017-12-25 | A kind of Nakedness-yet stereoscopic display method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109961478A true CN109961478A (en) | 2019-07-02 |
Family
ID=67020943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711420048.8A Withdrawn CN109961478A (en) | 2017-12-25 | 2017-12-25 | A kind of Nakedness-yet stereoscopic display method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961478A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012129708A (en) * | 2010-12-14 | 2012-07-05 | Toshiba Corp | Stereoscopic image signal processing device and method |
CN103293692A (en) * | 2013-06-19 | 2013-09-11 | 青岛海信电器股份有限公司 | Naked eye three-dimensional image display control method and device |
CN103529947A (en) * | 2013-10-31 | 2014-01-22 | 京东方科技集团股份有限公司 | Display device and control method thereof and gesture recognition method |
CN103597518A (en) * | 2011-06-06 | 2014-02-19 | 微软公司 | Generation of avatar reflecting player appearance |
CN104331902A (en) * | 2014-10-11 | 2015-02-04 | 深圳超多维光电子有限公司 | Target tracking method, target tracking device, 3D display method and 3D display device |
CN204578692U (en) * | 2014-12-29 | 2015-08-19 | 深圳超多维光电子有限公司 | Three-dimensional display system |
CN105812772A (en) * | 2014-12-29 | 2016-07-27 | 深圳超多维光电子有限公司 | Stereo display system and method for medical images |
CN105898287A (en) * | 2016-05-05 | 2016-08-24 | 清华大学 | Device and method for machine visual analysis based on naked-eye stereoscopic display |
CN106599656A (en) * | 2016-11-28 | 2017-04-26 | 深圳超多维科技有限公司 | Display method, device and electronic equipment |
CN106709303A (en) * | 2016-11-18 | 2017-05-24 | 深圳超多维科技有限公司 | Display method and device and intelligent terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102342982B1 (en) | Methods and related augmented reality methods for concealing objects in images or videos | |
CN105812778B (en) | Binocular AR wears display device and its method for information display | |
CN105469379B (en) | Video target area shielding method and device | |
CN103988504B (en) | The image processing equipment rendered for sub-pixel and method | |
CN110168562B (en) | Depth-based control method, depth-based control device and electronic device | |
CN105956576A (en) | Image beautifying method and device and mobile terminal | |
CN106355479A (en) | Virtual fitting method, virtual fitting glasses and virtual fitting system | |
CN106484116A (en) | The treating method and apparatus of media file | |
GB2333590A (en) | Detecting a face-like region | |
WO2021098486A1 (en) | Garment color recognition processing method, device, apparatus, and storage medium | |
CN109785228B (en) | Image processing method, image processing apparatus, storage medium, and server | |
AU2018293302B2 (en) | Method for filter selection | |
CN105809654A (en) | Target object tracking method and device, and stereo display equipment and method | |
CN104750933A (en) | Eyeglass trying on method and system based on Internet | |
EP3547672A1 (en) | Data processing method, device, and apparatus | |
CN110214339A (en) | For showing the method and apparatus with the image of the visual field changed | |
US10049599B2 (en) | System and method for assisting a colorblind user | |
CN104581127B (en) | Method, terminal and head-worn display equipment for automatically adjusting screen brightness | |
CN103034330A (en) | Eye interaction method and system for video conference | |
CN110662012A (en) | Naked eye 3D display effect optimization drawing arranging method and system and electronic equipment | |
CN109961477A (en) | A kind of space-location method, device and equipment | |
CN105979236A (en) | Image quality adjustment method and device | |
CN108398811A (en) | glasses, display server, display, display system and its working method | |
CN109961478A (en) | A kind of Nakedness-yet stereoscopic display method, device and equipment | |
CN108093243A (en) | A kind of three-dimensional imaging processing method, device and stereoscopic display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20190702 |