CN116433569A - Method for detecting illuminator on handle and virtual display device - Google Patents

Method for detecting illuminator on handle and virtual display device

Info

Publication number
CN116433569A
CN116433569A (Application CN202211149262.5A)
Authority
CN
China
Prior art keywords
candidate
contour
binarization
contours
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211149262.5A
Other languages
Chinese (zh)
Inventor
郑贵桢
周祺晟
曾杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Electronic Technology Shenzhen Co ltd
Original Assignee
Hisense Electronic Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co ltd filed Critical Hisense Electronic Technology Shenzhen Co ltd
Priority to CN202211149262.5A priority Critical patent/CN116433569A/en
Publication of CN116433569A publication Critical patent/CN116433569A/en
Priority to PCT/CN2023/119844 priority patent/WO2024061238A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

The application relates to the field of virtual display interaction and provides a method for detecting the light emitters on a handle and a virtual display device. A dynamic binarization-threshold adjustment technique based on ambient brightness is designed: according to the current ambient brightness determined from handle images acquired by the multi-view camera on the virtual display device, the binarization thresholds calculated by different binarization methods are weighted, so that interference of ambient illumination on the detection result is reduced and the robustness and performance of the algorithm are improved. Meanwhile, abnormal contours in the candidate contour set of the plurality of light emitters detected in the binarized handle image are rejected according to prior contour shape information and contour comparison information, so that the light emitters on the handle can be detected quickly and accurately, the relative pose between the handle and the virtual display device can be located quickly and accurately, and user experience is improved. The whole detection process uses simple image processing techniques, is fast, occupies little memory, and is convenient to deploy on portable wearable devices.

Description

Method for detecting illuminator on handle and virtual display device
Technical Field
The application relates to the technical field of virtual reality interaction, and provides a method for detecting a light emitter on a handle and virtual display equipment.
Background
For virtual reality (VR), augmented reality (AR), and other virtual display devices, conventional interaction is typically implemented through a handle, much like the control relationship between a personal computer (PC) and a mouse.
With the development of handle technology, handles have gradually been upgraded from 3 degrees of freedom (DOF) to 6DOF handles capable of full-degree-of-freedom interaction, improving the immersive experience of VR and AR users. 6DOF handles are especially widely used in game scenarios.
Currently, a mainstream 6DOF handle determines the 6DOF pose between the handle and a virtual display device by combining computer vision (CV) technology with the positioning technology of an inertial measurement unit (IMU), so that the handle can control the display screen of the virtual display device according to the 6DOF pose. The positioning process mainly relies on the light emitters on the handle, which are detected from visual images; therefore, if the light emitters on the handle are detected inaccurately, the 6DOF pose between the handle and the virtual display device will contain a large error, reducing control accuracy and seriously affecting user experience.
Therefore, improving the accuracy of detecting the light emitters on the handle is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a method for detecting a light emitter on a handle and a virtual display device, which are used to improve the accuracy of detecting the light emitters on the handle, thereby improving the accuracy of the relative pose between the handle and the virtual display device and realizing accurate human-computer interaction.
In one aspect, the present application provides a method for detecting a light emitter on a handle, the handle being used for controlling a picture displayed by a virtual display device, the method comprising:
acquiring an original handle image acquired by a multi-camera on the virtual display device, and carrying out gray processing on the original handle image to obtain a gray handle image;
determining current ambient brightness according to an original histogram of the gray handle image, and performing binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarization handle image, wherein the target binarization threshold is obtained by weighting based on binarization thresholds solved by at least two binarization methods;
performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters;
Removing abnormal contours in the candidate contour set according to the prior contour shape information and contour comparison information respectively to obtain a target contour set;
and obtaining each target illuminator on the handle for determining the relative pose between the handle and the virtual display device according to each target contour in the target contour set.
In another aspect, the application provides a virtual display device, including a processor, a memory, a display screen, a communication interface, and a multi-view camera, where the virtual display device communicates with a handle through the communication interface, and the communication interface, the multi-view camera, the display screen, the memory, and the processor are connected through a bus;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
acquiring an original handle image acquired by the multi-camera, and carrying out gray processing on the original handle image to obtain a gray handle image;
determining current ambient brightness according to an original histogram of the gray handle image, and performing binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarization handle image, wherein the target binarization threshold is obtained by weighting based on binarization thresholds solved by at least two binarization methods;
Performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters;
removing abnormal contours in the candidate contour set according to the prior contour shape information and contour comparison information respectively to obtain a target contour set;
and obtaining each target illuminator on the handle for determining the relative pose between the handle and the virtual display device according to each target contour in the target contour set.
In another aspect, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer device to perform the method of detecting an on-handle light emitter provided by the embodiments of the present application.
In the method for detecting the light emitters on the handle and the virtual display device provided by the present application, histogram analysis is performed on the grayscale image of the handle image acquired by the multi-view camera on the virtual display device, the current ambient brightness around the handle is determined, and the gray handle image is binarized with a target binarization threshold matched with the current ambient brightness; further, abnormal contours in the candidate contour set of the plurality of initial light emitters detected in the binarized handle image are rejected according to prior contour shape information and contour comparison information, so that each target light emitter is detected quickly and accurately from the remaining target contours, the relative pose between the handle and the virtual display device is located quickly and accurately, and user experience is improved. The whole detection process uses simple image processing techniques, the detection speed is high, memory-resource occupation is reduced, and the method is convenient to deploy on portable wearable devices.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a VR device and a handle provided in an embodiment of the present application;
fig. 2A is a schematic diagram of a virtual display device including a multi-view camera according to an embodiment of the present application;
FIG. 2B is a schematic diagram of a 6DOF handle including a plurality of LED white light lamps according to an embodiment of the present application;
FIG. 2C is a schematic illustration of a 6DOF handle including a plurality of LED infrared lamps provided in an embodiment of the present application;
fig. 3A is a diagram of an effect of ambient light on LED lamp detection according to an embodiment of the present disclosure;
fig. 3B is a diagram of an effect of background light on LED lamp detection according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for detecting an illuminator on a handle according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining a target binarization threshold according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for determining each binarization threshold weight according to an embodiment of the present application;
FIG. 7 is a flowchart of a method for eliminating abnormal contours based on the distance between candidate contours according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for eliminating abnormal contours based on the size of the area between candidate contours according to an embodiment of the present application;
FIG. 9 is a flowchart of a method for eliminating abnormal contours based on outlier characteristics of candidate contours according to an embodiment of the present application;
FIG. 10 is an overall framework diagram of detecting the light emitters on a handle according to an embodiment of the present application;
fig. 11 is a block diagram of a virtual display device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the technical solutions of the present application, but not all embodiments. All other embodiments, which can be made by a person of ordinary skill in the art without any inventive effort, based on the embodiments described in the present application are intended to be within the scope of the technical solutions of the present application.
In order to clearly explain the embodiments of the present application, explanation is given below for terms in the embodiments of the present application.
Contour: the outline formed by the outermost pixels of each disconnected binary region after the image is binarized; each disconnected binary region has only one outermost contour.
Contour area: refers to the sum of the areas of all the pixel points in the area surrounded by the outermost peripheral pixel points.
The following describes the design ideas of the embodiments of the present application.
Virtual display devices such as AR and VR generally refer to a head-mounted display device (abbreviated as a head display or a helmet, such as VR glasses and AR glasses) with an independent processor, and have functions of independent operation, input and output. The virtual display device can be externally connected with a handle, and a user controls a virtual picture displayed by the virtual display device through operating the handle, so that conventional interaction is realized.
Taking a game scene as an example, refer to fig. 1, which is a schematic diagram of an application scene of a VR device and a handle provided in an embodiment of the present application. As shown in fig. 1, the virtual game picture of the VR device is cast onto the television to take advantage of the television's large screen, making the experience more entertaining. The player controls the game picture of the VR device through the handle and reacts with body movements as the game scene changes, thereby enjoying an immersive, as-if-present experience and increasing the fun of the game.
In the game scene shown in fig. 1, in the interaction process, the relative pose of the handle and virtual display devices such as AR, VR and the like is calculated through CV and IMU positioning technology, so that three-dimensional interaction of the virtual display devices in a three-dimensional space is realized, and immersive experience is improved.
Generally, according to the pose they output, commonly used handles include 3DOF handles and 6DOF handles: a 3DOF handle outputs a 3D rotation attitude, while a 6DOF handle outputs both a 3D translation position and a 3D rotation attitude. Compared with a 3DOF handle, a 6DOF handle supports more complex game actions and is more engaging.
Currently, an IMU and a plurality of light emitters (such as LED lamps) are disposed on a commonly used 6DOF handle. The plurality of light emitters can emit different types of light, and the type of the multi-view camera on the virtual display device (circled in fig. 2A) should match the type of light emitted.
For example, referring to fig. 2B, a schematic diagram of a 6DOF handle provided in an embodiment of the present application: as shown in fig. 2B, the LED lamps disposed on the 6DOF handle emit white light, and the white dots indicate the positions of the LED lamps. In this case, to detect the positions of the LED lamps in the handle image, the multi-view camera on the virtual display device should be an RGB camera.
For another example, referring to fig. 2C, a schematic diagram of another 6DOF handle according to an embodiment of the present application is shown in fig. 2C, where an LED lamp disposed on the 6DOF handle emits infrared light (not visible to human eyes). At this time, in order to detect the position of the LED lamp in the handle image, the multi-camera on the virtual display device should be an infrared camera.
With the development of computer vision, target detection, target tracking, pose estimation, image processing and other technologies, the mainstream 6DOF handle positioning method determines the 6DOF pose between the handle and the virtual display device by combining CV with inertial measurement unit (IMU) positioning technology, so that the handle can control the display picture of the virtual display device according to the 6DOF pose. Specifically, handle images are acquired by the multi-view camera on a virtual display device such as AR or VR glasses, the handle images are processed to obtain the center point of each light emitter, and each center point is matched with the 3D structure of the light emitters on the handle; after matching, the relative pose between the 6DOF handle and the virtual display device is calculated through the PnP (Perspective-n-Point) principle of 3D visual geometry and IMU pre-integration, realizing 6DOF positioning.
In the 6DOF positioning process, as the light emitter on the handle is detected through the visual image, if the light emitter on the handle is detected inaccurately, a larger error exists in the 6DOF pose between the handle and the virtual display device, the control precision is reduced, and the user experience is seriously affected. Thus, the detection of the light on the handle plays a significant role in the 6DOF positioning.
However, most of the current algorithms for detecting the light emitters on the handle are LED detection algorithms based on artificial intelligence (Artificial Intelligence, AI) neural networks, and mainly face the following problems:
1. in practical application, the real world environment where the user is located is complex, so that the background environment has a plurality of devices very similar to the light emitter, and the difficulty of eliminating the similar devices is high, which can interfere with the accuracy and stability of the detection of the light emitter on the handle.
For example, the application scenario shown in fig. 3A (a) includes, in addition to the light emitters on the handle, some lamps in the corridors and rooms of the three-dimensional space. During light emitter detection, these interfering devices may be erroneously detected in addition to the light emitters on the handle, as shown in fig. 3A (b).
2. When the light emitters on the handle emit infrared light, the infrared camera on the virtual display device is affected by ambient light sources, which increases the difficulty of correctly detecting the light emitters on the handle; abnormal detection of the light emitters in turn causes abnormal positioning later on.
For example, in the application scenario shown in fig. 3B (a), the background behind the handle includes an LED display screen showing text. During light emitter detection, the LED display screen may interfere with the detection of the LED infrared lamps on the handle, as shown in fig. 3B (b).
3. In order to ensure smooth immersive experience, the virtual display device needs to deploy an algorithm with strong real-time performance, and the visual positioning part is relatively difficult to optimize, low in detection efficiency and difficult to deploy.
In view of this, the embodiments of the present application provide a method for detecting the light emitters on a handle and a virtual display device, which can accurately detect the light emitters on the handle by performing a series of image processing operations on the original handle images acquired by the multi-view camera on the virtual display device, greatly reducing development difficulty and cost. Meanwhile, to increase the detection speed of the light emitters on the handle, abnormal contours among those detected by the image processing are rejected, which raises the running speed, reduces memory-resource occupation, and facilitates deployment on portable wearable devices. Furthermore, to adapt to usage scenes with different ambient illumination so that the algorithm runs stably and robustly in complex environments, a dynamic binarization-threshold adjustment technique based on ambient brightness is provided, which can dynamically set an adaptive binarization threshold according to the current ambient brightness, further improving the robustness and performance of the algorithm.
Compared with AI-neural-network-based LED detection methods, the embodiments of the present application do not need a high-configuration processor for network training, nor do they require labeling a large amount of data, which reduces the hardware resources required for development as well as the development cost and workload. Compared with general image-processing LED detection methods, the embodiments of the present application adaptively adjust the binarization threshold according to the current ambient brightness, and the threshold used for binarization is obtained by weighting the thresholds of at least two binarization methods, which greatly improves the robustness of the algorithm in complex scenes and broadens its application range; meanwhile, according to the contour characteristics of the light emitters, the embodiments of the present application reject lighting devices that interfere with handle positioning, further improving the performance of the algorithm and the detection accuracy.
Referring to fig. 4, a flow of a method for detecting a light emitter on a handle according to an embodiment of the present application mainly includes the following steps:
s401: and acquiring an original handle image acquired by the multi-camera on the virtual display device, and carrying out gray processing on the original handle image to obtain a gray handle image.
In S401, when the type of the multi-camera on the virtual display device is an RGB camera, the collected original handle image is an RGB image, and when the type of the multi-camera on the virtual display device is an infrared camera, the collected original handle image is an infrared image, both the RGB image and the infrared image contain illumination characteristics of a plurality of light emitters on the handle, and detection of the light emitters can be performed.
In S401, the manner of graying the original handle image is not limited in this embodiment, and conventional graying methods may be used, including but not limited to floating point method, integer method, shift method, average method, and the like.
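As a minimal illustration of this graying step (not the patent's required implementation), the following Python/OpenCV snippet performs the standard weighted (floating-point) conversion; the file path and variable names are illustrative, and an infrared camera would typically already deliver a single-channel image, in which case this step can be skipped.

```python
import cv2

# "handle.png" is a placeholder path for an original handle image from an RGB camera.
original_handle_img = cv2.imread("handle.png")
# Standard weighted (floating-point) BGR-to-gray conversion.
gray_handle_img = cv2.cvtColor(original_handle_img, cv2.COLOR_BGR2GRAY)
```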
S402: according to the original histogram of the gray handle image, determining the current ambient brightness, and carrying out binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarized handle image.
In S402, the brightness level of the current environment may be determined by analyzing the original histogram of the gray-scale image. Specifically, when the peak of the original histogram is located at the dark side with the gray value smaller than 100, it indicates that no bright illumination exists in the current environment, and at this time, the current environment brightness is determined to be dark; when the peak of the original histogram is located on the bright side with the gray value being greater than or equal to 100, it indicates that there is bright illumination in the current environment, and at this time, the current environment brightness is determined to be bright.
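The sketch below (Python with OpenCV/NumPy) illustrates this brightness check under the gray-value boundary of 100 described above; the function name and string labels are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def estimate_ambient_brightness(gray_img: np.ndarray) -> str:
    """Classify ambient brightness from the location of the histogram peak:
    a peak at gray value >= 100 means "bright", otherwise "dark"."""
    hist = cv2.calcHist([gray_img], [0], None, [256], [0, 256]).ravel()
    peak_gray_value = int(np.argmax(hist))
    return "bright" if peak_gray_value >= 100 else "dark"
```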
Further, after the current ambient brightness is determined, the gray handle image is binarized with a target binarization threshold matched with the current ambient brightness. The methods suitable for binarizing a gray handle image containing a plurality of light emitters mainly include the following two:
Maximum inter-class variance method (Otsu's method): this method takes maximizing the inter-class variance between the foreground and the background as its core idea, and is suitable for solving the binarization threshold when the histogram distribution is approximately bimodal;
Triangle method: a straight line is constructed from the highest peak of the histogram to its far end, the perpendicular distance from each histogram bin to this line is computed, and the bin position corresponding to the maximum perpendicular distance is taken as the binarization threshold.
Because virtual game scenes are complex and ambient brightness varies greatly, neither of the two methods alone can achieve an ideal binarization result. To adapt to a wider range of usage scenarios, in S402, based on the two mainstream adaptive binarization-threshold solving algorithms above and considering the actual distribution of the original histogram of the gray handle image used for light emitter detection, Otsu's method and the triangle method are combined, so that a target binarization threshold that binarizes the gray handle image more reasonably can be obtained, adapted to both bright and dim environments.
In specific implementation, the determining method of the target binarization threshold refers to a flow shown in fig. 5, and mainly includes the following steps:
s4021: and eliminating pixel points with gray values lower than a preset gray threshold in the gray handle image, and respectively determining respective binarization thresholds of at least two binarization methods according to the new histogram of the gray handle image after the pixel points are eliminated.
Since the brightness of the light emitter on the handle is substantially stable under different environments, dim background with too low brightness should be excluded when calculating the binarization threshold by the binarization method. Therefore, in S4021, pixels in the gray handle image having a gray value lower than the preset gray threshold are removed, a new histogram of the current image is calculated according to the remaining pixels in the gray handle image, and the respective binarization thresholds of the at least two binarization methods are determined according to the new histogram.
Optionally, because the environment where the handle is located is complex and varied, a minimum guarantee threshold can be set in advance for each binarization method to guard against abnormal cases. Specifically, when the binarization threshold calculated from the new histogram is lower than the preset minimum guarantee threshold, the calculated binarization threshold is forcibly set to the preset minimum guarantee threshold, which strengthens the stability of the algorithm under special conditions.
For example, when the binarization threshold calculated by Otsu's method from the new histogram is lower than the preset minimum guarantee threshold, the preset minimum guarantee threshold is taken as the binarization threshold corresponding to Otsu's method; when the binarization threshold calculated by the triangle method is lower than the preset minimum guarantee threshold, the preset minimum guarantee threshold is taken as the binarization threshold corresponding to the triangle method.
In S4021, in addition to the binarization thresholds of Otsu's method and the triangle method determined from the new histogram, binarization thresholds of other binarization methods may also be determined.
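As a rough sketch of S4021 (assuming OpenCV's built-in Otsu and triangle solvers; the preset gray threshold and the minimum guarantee values below are illustrative placeholders, not values from the patent), the per-method thresholds could be computed on the retained pixels as follows.

```python
import cv2
import numpy as np

def per_method_thresholds(gray_img, low_gray_cut=30, min_guarantee=(60, 60)):
    """Solve the triangle and Otsu thresholds on the pixels left after removing
    too-dark background, then clamp each to its minimum guarantee value.
    low_gray_cut and min_guarantee are illustrative assumptions."""
    # Pixels at or above the preset gray threshold form the "new histogram"
    kept = gray_img[gray_img >= low_gray_cut].reshape(1, -1)
    if kept.size == 0:
        kept = gray_img.reshape(1, -1)  # fall back to all pixels if nothing remains
    t_triangle, _ = cv2.threshold(kept, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
    t_otsu, _ = cv2.threshold(kept, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Force a threshold up to the minimum guarantee if it falls below it
    t_triangle = max(t_triangle, min_guarantee[0])
    t_otsu = max(t_otsu, min_guarantee[1])
    return t_triangle, t_otsu
```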
S4022: and comparing the current ambient brightness with a preset brightness threshold.
In S4022, by comparing the current ambient brightness with the preset brightness threshold, a reasonable target binarization threshold for binarizing the handle gray-scale image can be determined, so that interference of ambient light is reduced, and accuracy of detecting the light emitter is improved.
S4023: and respectively determining weights corresponding to the at least two binarization thresholds according to the comparison result.
In S4023, according to the comparison result between the current ambient brightness and the preset brightness threshold, the adaptation degree between the current ambient brightness and the binarization threshold solved by each binarization method may be determined, where the adaptation degree may be reflected by a weight.
The following takes the process of weighting the binarization threshold values solved by two binarization methods to obtain a target binarization threshold value as an example, referring to fig. 6, the determination mode of the weight mainly includes the following steps:
s4023_1: and determining whether the current ambient brightness is greater than a preset brightness threshold, if so, executing S4023_2, otherwise, executing S4023_3.
S4023_2: setting a first weight corresponding to a first binarization threshold calculated by a first binarization method and being larger than a second weight corresponding to a second binarization threshold calculated by a second binarization method.
In S4023_2, when the current ambient brightness is greater than the preset brightness threshold, it indicates that the handle is in a bright environment, and at this time, the first binarization threshold calculated by the first binarization method is more adaptive to the current ambient brightness, i.e., the first binarization threshold calculated by the first binarization method is more accurate, so that the first weight corresponding to the first binarization threshold is set to be greater than the second weight corresponding to the second binarization threshold calculated by the second binarization method.
S4023_3: setting a first weight corresponding to a first binarization threshold calculated by a first binarization method, and setting a second weight corresponding to a second binarization threshold calculated by a second binarization method.
In S4023_3, when the current ambient brightness is less than or equal to the preset brightness threshold, it indicates that the handle is in a dim environment, and at this time, the second binarization threshold calculated by the second binarization method is more adaptive to the current ambient brightness, i.e., the second binarization threshold calculated by the second binarization method is more accurate, so that the first weight corresponding to the first binarization threshold is set to be smaller than the second weight corresponding to the second binarization threshold calculated by the second binarization method.
Optionally, the first binarization method in S4023_2 to S4023_3 is the triangle method, and the second binarization method is Otsu's method.
S4024: and weighting according to each binarization threshold and the corresponding weight to obtain the target binarization threshold.
Taking the triangle method as the first binarization method and Otsu's method as the second binarization method as an example, assume the first binarization threshold is denoted S1 with corresponding first weight α, and the second binarization threshold is denoted S2 with corresponding second weight β. The target binarization threshold S is then calculated as:
S = α × S1 + β × S2    (Equation 1)
Optionally, when the current ambient brightness is greater than the preset brightness threshold, α = 0.7 and β = 0.3; when the current ambient brightness is less than or equal to the preset brightness threshold, α = 0.3 and β = 0.7.
And after the target binarization threshold value matched with the current environment brightness is obtained, carrying out binarization processing on the gray handle image according to the target binarization threshold value to obtain a binarization handle image.
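A minimal sketch of Equation 1 and the final binarization follows, reusing the per_method_thresholds helper sketched above and the optional weight values 0.7/0.3 given earlier; the boolean ambient-brightness argument is an illustrative stand-in for the comparison with the preset brightness threshold.

```python
import cv2

def binarize_handle_image(gray_img, ambient_is_bright: bool):
    """Weight the triangle threshold S1 and Otsu threshold S2 per Equation 1
    (S = alpha*S1 + beta*S2) and binarize the gray handle image with S."""
    s1, s2 = per_method_thresholds(gray_img)  # triangle (S1), Otsu (S2), from the sketch above
    alpha, beta = (0.7, 0.3) if ambient_is_bright else (0.3, 0.7)
    target_threshold = alpha * s1 + beta * s2
    _, binarized = cv2.threshold(gray_img, target_threshold, 255, cv2.THRESH_BINARY)
    return binarized, target_threshold
```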
S403: and performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters.
Since other light-emitting devices in the surrounding environment may also emit light besides the plurality of light emitters on the handle, the candidate contour set obtained by contour detection in S403 may include, in addition to the contours of the light emitters, contours of other light-emitting devices that interfere with them; the candidate contour set therefore needs to be screened.
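A minimal sketch of the contour detection in S403, assuming OpenCV 4.x, where external-only retrieval gives exactly one outermost contour per disconnected white region:

```python
import cv2

def detect_candidate_contours(binarized_img):
    """Return the outermost contour of every disconnected white region in the
    binarized handle image; these form the initial candidate contour set."""
    contours, _ = cv2.findContours(binarized_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return list(contours)
```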
S404: and removing abnormal contours in the candidate contour set according to the prior contour shape information and the contour comparison information respectively to obtain a target contour set.
In S404, for each candidate contour in the candidate contour set, at least one of the following rejection operations based on the prior contour shape information is performed (a combined sketch follows this list of operations):
Rejection operation one: according to the relationship between the area of the candidate contour and the aspect ratio of its circumscribed rectangle, reject candidate contours whose circumscribed-rectangle aspect ratio exceeds a first preset proportion threshold, where the first preset proportion threshold varies in steps with the area of the candidate contour.
As the area of the candidate contour grows, the length and width of its circumscribed rectangle are required to be closer to each other. Therefore, in rejection operation one, to improve the accuracy of contour detection, the embodiment of the present application performs abnormal contour rejection using a stepped proportion threshold: the first preset proportion threshold and the area of the candidate contour are related in steps, and the larger the area of the candidate contour, the smaller the first preset proportion threshold. When the aspect ratio of the circumscribed rectangle of a candidate contour exceeds the first preset proportion threshold, the candidate contour is regarded as a false detection and is rejected.
Rejection operation two: reject candidate contours whose ratio of contour area to circumscribed-rectangle area is smaller than a preset ratio threshold.
Rejection operation three: calculate the distances along the horizontal and vertical axes between the gray-scale centroid of the candidate contour and the center point of its circumscribed rectangle, compute the ratio of each distance to the side length of the candidate contour, and reject the candidate contour if at least one of the two ratios exceeds a second preset proportion threshold.
Rejection operation four: determine the roundness of the candidate contour from the total number of pixels contained in the candidate contour and its perimeter; if the roundness is lower than a preset roundness threshold, reject the candidate contour.
Assuming that the total number of pixels included in the candidate contour (including pixels inside the candidate contour and pixels on the contour boundary) is P, and the perimeter of the candidate contour is C, the calculation formula of the roundness R is:
R = (4 × π × P) / C²    (Equation 2)
Rejection operation five: compute the average brightness of the candidate contour; if the average brightness is smaller than a preset brightness threshold, reject the candidate contour.
Rejection operation six: determine the average brightness of a preset peripheral region around the circumscribed rectangle of the candidate contour and the average brightness of the candidate contour; if the difference between the two average brightness values is smaller than a preset brightness difference, reject the candidate contour.
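The sketch below shows how rejection operations one to five could be combined into a single filter. All numeric thresholds are illustrative placeholders (the patent gives no concrete values here), and the single constant aspect-ratio bound stands in for the stepped threshold of rejection operation one.

```python
import cv2
import numpy as np

def passes_shape_priors(contour, gray_img, aspect_thresh=3.0, fill_thresh=0.3,
                        offset_thresh=0.5, roundness_thresh=0.6, brightness_thresh=120):
    """Simplified combination of rejection operations one to five; every threshold
    value here is an illustrative placeholder."""
    x, y, w, h = cv2.boundingRect(contour)
    area = cv2.contourArea(contour)
    # Op 1 (simplified): bound on the circumscribed-rectangle aspect ratio
    if max(w, h) / max(min(w, h), 1) > aspect_thresh:
        return False
    # Op 2: contour area over circumscribed-rectangle area
    if area / max(w * h, 1) < fill_thresh:
        return False
    # Fill a mask of the contour for centroid, pixel-count and brightness checks
    mask = np.zeros(gray_img.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    # Op 3: offset between gray-scale centroid and rectangle center, per axis
    m = cv2.moments(np.where(mask > 0, gray_img, 0))
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        if (abs(cx - (x + w / 2)) / max(w, 1) > offset_thresh or
                abs(cy - (y + h / 2)) / max(h, 1) > offset_thresh):
            return False
    # Op 4: roundness R = 4*pi*P / C^2 (P: pixel count, C: perimeter, Equation 2)
    pixel_count = int(np.count_nonzero(mask))
    perimeter = cv2.arcLength(contour, True)
    if perimeter > 0 and 4 * np.pi * pixel_count / perimeter ** 2 < roundness_thresh:
        return False
    # Op 5: average brightness inside the contour
    return cv2.mean(gray_img, mask=mask)[0] >= brightness_thresh
```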
When the abnormal contours in the candidate contour set are eliminated according to the prior contour shape information, the abnormal contours are eliminated aiming at a single candidate contour, and the relation among the candidate contours is not considered. Therefore, in S404, the abnormal contours in the candidate contour set are further removed according to the contour comparison information.
Specifically, in S404, the manner of eliminating the abnormal contours in the candidate contour set according to the contour comparison information includes one or more of the following:
Rejection operation seven: for every two candidate contours in the candidate contour set, determine the Euclidean distance between the center points of their circumscribed rectangles and the minimum Manhattan distance between their edges, and reject abnormal contours according to the Euclidean distance and the minimum Manhattan distance.
The specific process of eliminating abnormal contours according to the euclidean distance and the minimum manhattan distance between every two candidate contours is shown in fig. 7, and mainly includes the following steps:
s404_11: determining whether at least one of the Euclidean distance and the minimum Manhattan distance between the two candidate contours is less than a preset distance threshold, if so, executing S404_12, otherwise, executing S404_16.
In S404_11, the degree of approximation of the two candidate contours can be determined based on the Euclidean distance and the minimum Manhattan distance between them. When at least one of the Euclidean distance and the minimum Manhattan distance between the two candidate contours is smaller than the preset distance threshold, the two candidate contours have a high degree of approximation, a further abnormality judgment is needed, and S404_12 is executed; when the Euclidean distance and the minimum Manhattan distance between the two candidate contours are both not smaller than the preset distance threshold, the degree of approximation of the two candidate contours is low, and S404_16 should be executed.
S404_12: the areas of the two candidate contours are calculated separately.
S404_13: and determining whether the areas of the two candidate contours are smaller than a preset area threshold, if so, executing S404-14, otherwise, executing S404-15.
In s404_13, the anomaly determination is further performed by comparing the calculated areas of the two candidate contours with a preset area threshold.
S404_14: and simultaneously eliminating two candidate contours.
In s404_14, when the areas of the two candidate contours are smaller than the preset area threshold, it is indicated that the two candidate contours may be noise points, and the two candidate contours should be removed at the same time.
S404_15: and respectively calculating the brightness average value of the two candidate contours, and eliminating one candidate contour corresponding to the small brightness average value.
In s404—15, when at least one of the areas of the two candidate contours is not smaller than the preset area threshold, abnormal elimination may be performed by the luminance average. Specifically, the brightness average values of the two candidate contours are calculated respectively, the sizes of the two brightness average values are compared, and one candidate contour corresponding to the small brightness average value is removed from the candidate contour set.
S404_16: while retaining two candidate contours.
In S404_16, when the Euclidean distance and the minimum Manhattan distance between the two candidate contours are both not smaller than the preset distance threshold, the degree of approximation of the two candidate contours is low, and both candidate contours can be retained in the candidate contour set.
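A simplified sketch of rejection operation seven (the fig. 7 flow) follows. It approximates the minimum Manhattan distance between contour edges with the gap between bounding rectangles, and the distance and area thresholds are illustrative placeholders.

```python
import cv2
import numpy as np

def pairwise_distance_reject(contours, gray_img, dist_thresh=20.0, area_thresh=9.0):
    """For each pair of candidate contours that are too close, drop both if both
    are tiny (likely noise points), otherwise drop the dimmer one."""
    def mean_brightness(c):
        mask = np.zeros(gray_img.shape, dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        return cv2.mean(gray_img, mask=mask)[0]

    keep = [True] * len(contours)
    rects = [cv2.boundingRect(c) for c in contours]
    for i in range(len(contours)):
        for j in range(i + 1, len(contours)):
            if not (keep[i] and keep[j]):
                continue
            (xi, yi, wi, hi), (xj, yj, wj, hj) = rects[i], rects[j]
            euclid = float(np.hypot((xi + wi / 2) - (xj + wj / 2),
                                    (yi + hi / 2) - (yj + hj / 2)))
            # Gap between the two rectangles along x and y, used as the
            # edge-to-edge Manhattan distance approximation
            gap_x = max(0, max(xi, xj) - min(xi + wi, xj + wj))
            gap_y = max(0, max(yi, yj) - min(yi + hi, yj + hj))
            manhattan = gap_x + gap_y
            if euclid >= dist_thresh and manhattan >= dist_thresh:
                continue  # far enough apart: keep both (S404_16)
            ai, aj = cv2.contourArea(contours[i]), cv2.contourArea(contours[j])
            if ai < area_thresh and aj < area_thresh:
                keep[i] = keep[j] = False          # both tiny: reject both (S404_14)
            elif mean_brightness(contours[i]) < mean_brightness(contours[j]):
                keep[i] = False                    # reject the dimmer one (S404_15)
            else:
                keep[j] = False
    return [c for c, k in zip(contours, keep) if k]
```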
Rejection operation eight: reject abnormal contours according to the relationship between the numbers of pixels in the candidate contour with the largest area and the candidate contour with the second-largest area.
After the candidate contours are sorted by area, the candidate contour with the largest area and the candidate contour with the next largest area in the candidate contour set can be selected, and the abnormal contour is eliminated according to the number relation between the pixel points in the two selected candidate contours, referring to fig. 8, mainly comprising the following steps:
s404_21: and if the number of the pixel points in the largest-area candidate contour and the next largest-area candidate contour exceeds the threshold value of the number of the preset pixel points, executing S404-22, otherwise, executing S404-25.
The number of pixels in the two candidate contours may reflect the approximation degree of the two candidate contours, so in S404_21, it may be determined whether the two candidate contours are similar in shape according to the comparison between the number of pixels in the largest-area candidate contour and the second largest-area candidate contour and the threshold value of the number of preset pixels, respectively.
S404_22: and calculating the multiple between the largest candidate contour and the number of pixel points in the next largest candidate contour.
S404_23: and determining whether the multiple is greater than a preset multiple threshold, if so, executing S404_24, otherwise, executing S404_25.
In S404_23, the abnormality determination is further made based on the multiple between the numbers of pixels in the largest-area candidate contour and the second-largest-area candidate contour.
S404_24: and eliminating the candidate contour with the largest area.
In S404_24, when the multiple between the numbers of pixels in the largest-area candidate contour and the second-largest-area candidate contour is greater than the preset multiple threshold, the largest-area candidate contour may be an interfering object similar in shape to the light emitters on the handle and should be removed from the candidate contour set.
S404_25: the largest area candidate contour and the next largest area candidate contour are retained.
In s404_25, when one of the numbers of pixels in the area maximum candidate contour and the area sub-maximum candidate contour does not exceed the preset threshold number of pixels, or the multiple between the number of pixels in the area maximum candidate contour and the number of pixels in the area sub-maximum candidate contour is not greater than the preset multiple threshold, the area maximum candidate contour and the area sub-maximum candidate contour are reserved.
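A simplified sketch of rejection operation eight (the fig. 8 flow); the pixel counts are approximated by contour areas, and both numeric thresholds are illustrative placeholders.

```python
import cv2

def reject_oversized_largest(contours, pixel_count_thresh=200, multiple_thresh=3.0):
    """If the largest and second-largest candidate contours are both large but the
    largest is many times bigger, treat the largest as an interfering object."""
    if len(contours) < 2:
        return contours
    ranked = sorted(contours, key=cv2.contourArea, reverse=True)
    largest, second = cv2.contourArea(ranked[0]), cv2.contourArea(ranked[1])
    if (largest > pixel_count_thresh and second > pixel_count_thresh
            and largest / second > multiple_thresh):
        # Reject only the largest-area candidate contour, keeping the original order
        return [c for c in contours if c is not ranked[0]]
    return contours
```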
Rejection operation nine: for each candidate contour in the candidate contour set, calculate the distance between the candidate contour and its nearest-neighbor candidate contour, and reject outlier abnormal contours according to this distance.
The process of eliminating outlier abnormal contours according to the distance between the candidate contours and the nearest candidate contours is shown in fig. 9, and mainly includes the following steps:
s404_31: and determining the self-adaptive outlier distance according to the side lengths of the candidate contours and the side length median of all the candidate contours.
In S404_31, all candidate contours in the candidate contour set are sorted by their side lengths to obtain the median side length, and the adaptive outlier distance is determined from this median together with the side length of the current candidate contour.
S404_32: and determining whether the distance between the candidate contour and the nearest neighbor candidate contour is larger than the self-adaptive outlier distance, if so, executing S404_33, otherwise, executing S404_36.
S404_33: and determining whether the number of all candidate contours is greater than a preset number threshold, if so, executing S404-34, otherwise, executing S404-35.
S404_34: and eliminating candidate contours.
When the distance between the candidate contour and the nearest neighbor candidate contour is larger than the self-adaptive outlier distance and the number of all the candidate contours is larger than a preset number threshold, the candidate contour is an abnormal outlier contour and should be removed.
S404_35: the candidate contours are retained.
S404_36: and (5) finishing outlier rejection.
When the number of all candidate contours is small, it may not be representative of a group, and at this time, abnormal contours may not be eliminated by outlier rejection, and abnormal rejection may be performed by other means.
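A simplified sketch of rejection operation nine (the fig. 9 flow) follows. How the adaptive outlier distance combines the contour's side length with the median is not spelled out numerically in the text, so the scaling factor and the minimum group size below are illustrative assumptions.

```python
import cv2
import numpy as np

def reject_outlier_contours(contours, min_group_size=4, scale=10.0):
    """Reject a candidate contour whose nearest-neighbor distance exceeds an
    adaptive outlier distance derived from its side length and the median
    side length of all candidate contours."""
    if len(contours) <= min_group_size:
        return contours          # too few contours to represent a group (S404_36)
    rects = [cv2.boundingRect(c) for c in contours]
    centers = np.array([[x + w / 2, y + h / 2] for x, y, w, h in rects])
    sides = np.array([max(w, h) for _, _, w, h in rects], dtype=float)
    median_side = float(np.median(sides))
    kept = []
    for i, c in enumerate(contours):
        others = np.delete(centers, i, axis=0)
        nearest = float(np.min(np.linalg.norm(others - centers[i], axis=1)))
        adaptive_outlier_dist = scale * max(sides[i], median_side)  # assumed form
        if nearest <= adaptive_outlier_dist:
            kept.append(c)       # not an outlier (S404_35)
    return kept
```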
Rejection operation ten: calculate the average brightness of each candidate contour in the candidate contour set and reject abnormal contours according to these average brightness values.
In the rejection operation ten, the brightness average value of each candidate contour in the candidate contour set is ordered from large to small, the first N (N is an integer greater than or equal to 1) candidate contours are reserved, and the rest candidate contours are rejected.
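A minimal sketch of rejection operation ten; N = 16 is an illustrative placeholder and the brightness helper mirrors the one used in the earlier pairwise sketch.

```python
import cv2
import numpy as np

def keep_top_n_brightest(contours, gray_img, n=16):
    """Sort candidate contours by average brightness (descending) and keep the first N."""
    def mean_brightness(c):
        mask = np.zeros(gray_img.shape, dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        return cv2.mean(gray_img, mask=mask)[0]
    return sorted(contours, key=mean_brightness, reverse=True)[:n]
```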
It should be noted that rejection operations one through ten have no strict execution order: abnormal contour rejection may be performed first according to the prior contour shape information and then according to the contour comparison information, or first according to the contour comparison information and then according to the prior contour shape information, or the two rejection modes may be carried out alternately.
S405: and obtaining each target illuminator on the handle for determining the relative pose between the handle and the virtual display device according to each target contour in the target contour set.
In S405, after the abnormal contours in the candidate contour set are rejected, the target contour set is obtained. The center of each target contour in the target contour set is matched against the factory 3D structure of the light emitters on the handle to accurately obtain the plurality of target light emitters on the handle. Then, based on the target light emitters detected by the binocular camera, a PnP algorithm is used to align the coordinate systems of the handle and the virtual display device, and the data collected by the IMU on the handle is pre-integrated after alignment to obtain the relative 6DOF pose between the handle and the virtual display device, realizing control of the virtual display device's display picture by the handle.
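The PnP step described above could look roughly like the following sketch, where object_points are the factory 3D positions of the matched light emitters and image_points are the corresponding target-contour centers; the camera intrinsics and the subsequent fusion with IMU pre-integration are outside this sketch, and all names are illustrative.

```python
import cv2
import numpy as np

def estimate_relative_pose(object_points, image_points, camera_matrix, dist_coeffs):
    """Solve the handle pose relative to the camera from 3D emitter positions and
    their detected 2D centers; returns a rotation vector and a translation vector."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(object_points, dtype=np.float64),
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; check the contour-to-emitter matching")
    return rvec, tvec
```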
Referring to fig. 10, the overall framework of detecting the light emitters on a handle according to an embodiment of the present application mainly includes four parts: binarization processing, contour detection, abnormal contour rejection based on prior contour shape information, and abnormal contour rejection based on contour comparison information. Wherein:
the binarization processing part mainly includes the following contents: (1) graying the original handle image acquired by the multi-view camera to obtain a gray handle image; (2) determining the current ambient brightness from the gray handle image; (3) when the current ambient brightness is bright, setting the weight of the binarization threshold calculated by the triangle method to be larger than the weight of the binarization threshold calculated by Otsu's method, and when the current ambient brightness is dim, setting the weight of the binarization threshold calculated by the triangle method to be smaller than the weight of the binarization threshold calculated by Otsu's method; (4) weighting the binarization threshold calculated by the triangle method and the binarization threshold calculated by Otsu's method to obtain the target binarization threshold, and binarizing the gray handle image according to the target binarization threshold.
In the contour detecting section, (5) a candidate contour set is obtained by contour detecting the binarized handle image.
The abnormal contour rejection part based on the prior contour shape information mainly includes the following rejection operations: (6) rejecting abnormal contours based on the aspect ratio of the candidate contour's circumscribed rectangle; (7) rejecting abnormal contours based on the area ratio of the candidate contour to its circumscribed rectangle; (8) rejecting abnormal contours based on the gray-centroid offset of the candidate contour; (9) rejecting abnormal contours based on the roundness of the candidate contour; (10) rejecting abnormal contours based on the average brightness of the candidate contour; (11) rejecting abnormal contours based on the average brightness inside and outside the candidate contour.
The abnormal contour rejection part based on the contour comparison information mainly includes the following rejection operations: (12) rejecting abnormal contours based on the distances between candidate contours; (13) rejecting abnormal contours based on the areas of the candidate contours; (14) rejecting abnormal contours based on the average brightness of each candidate contour; (15) rejecting outlier abnormal contours based on the outlier characteristics of the candidate contours.
According to the method for detecting the light emitters on the handle provided by the embodiments of the present application, the light emitters on the handle can be detected quickly and accurately by performing a series of image processing operations on the original handle images acquired by the multi-view camera on the virtual display device, greatly reducing development difficulty and cost. Histogram analysis is performed on the grayscale image of the handle image acquired by the multi-view camera, the current ambient brightness around the handle is determined, and the weights of the binarization thresholds calculated by different binarization methods are determined according to the current ambient brightness; the gray handle image is then binarized with the weighted target binarization threshold adapted to the current ambient brightness. Dynamically adjusting the adaptive binarization threshold according to the current ambient brightness improves the robustness and performance of the algorithm, and because the target binarization threshold is obtained by weighting the binarization thresholds of at least two binarization methods, interference of ambient illumination on the detection result is reduced and the application range is broadened. Further, abnormal contours in the candidate contour set of the plurality of initial light emitters detected in the binarized handle image are rejected according to the prior contour shape information and the contour comparison information, so that each target light emitter is detected quickly and accurately from the remaining target contours, the relative pose between the handle and the virtual display device is located quickly and accurately, and user experience is improved. The whole detection process uses simple image processing, the detection speed is high, memory-resource occupation is reduced, and the method is convenient to deploy on portable wearable devices.
Based on the same technical concept, the embodiments of the present application provide a virtual display device, which can execute the above method for detecting the light emitter on the handle, and can achieve the same technical effects.
Referring to fig. 11, the virtual display device includes a processor 1101, a memory 1102, a display screen 1103, a communication interface 1104 and a multi-view camera 1105. The virtual display device communicates with the handle through the communication interface 1104, and the communication interface 1104, the multi-view camera 1105, the display screen 1103, the memory 1102 and the processor 1101 are connected through a bus 1106;
the memory 1102 stores a computer program, and the processor 1101 performs the following operations according to the computer program:
acquiring an original handle image acquired by the multi-camera 1105, and performing gray processing on the original handle image to obtain a gray handle image;
determining current ambient brightness according to an original histogram of the gray handle image, and performing binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarization handle image, wherein the target binarization threshold is obtained by weighting based on binarization thresholds solved by at least two binarization methods;
Performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters;
removing abnormal contours in the candidate contour set according to the prior contour shape information and contour comparison information respectively to obtain a target contour set;
and obtaining each target illuminator on the handle for determining the relative pose between the handle and the virtual display device according to each target contour in the target contour set.
Optionally, the processor 1101 determines a target binarization threshold value that matches the current ambient brightness by:
removing pixel points with gray values lower than a preset gray threshold value in the gray handle image, and respectively determining respective binarization thresholds of the at least two binarization methods according to new histograms of the gray handle image after the pixel points are removed;
comparing the current ambient brightness with a preset brightness threshold;
respectively determining weights corresponding to the at least two binarization thresholds according to the comparison result;
and weighting according to each binarization threshold and the corresponding weight to obtain the target binarization threshold.
Optionally, the processor 1101 determines weights corresponding to the binarization thresholds in the at least two binarization methods according to the comparison result, which specifically includes:
When the current ambient brightness is larger than a preset brightness threshold, setting the first weight corresponding to the first binarization threshold calculated by the first binarization method to be larger than the second weight corresponding to the second binarization threshold calculated by the second binarization method;
when the current ambient brightness is smaller than or equal to the preset brightness threshold, setting the first weight corresponding to the first binarization threshold calculated by the first binarization method to be smaller than the second weight corresponding to the second binarization threshold calculated by the second binarization method.
Optionally, when the binarization threshold determined according to the new histogram is lower than a preset minimum guarantee threshold, the processor 1101 sets the binarization threshold as the preset minimum guarantee threshold.
Optionally, the manner of eliminating the abnormal contours in the candidate contour set by the processor 1101 according to contour comparison information includes one or more of the following:
for every two candidate contours in the candidate contour set, determining the Euclidean distance between the center points of the circumscribed rectangles of the two candidate contours and the minimum Manhattan distance between the edges of the two candidate contours, and eliminating abnormal contours according to the Euclidean distance and the minimum Manhattan distance;
ranking all candidate contours in the candidate contour set according to their areas, and eliminating abnormal contours according to the relation between the numbers of pixel points in the candidate contour with the largest area and the candidate contour with the second largest area;
for each candidate contour in the candidate contour set, calculating the distance between the candidate contour and the nearest candidate contour, and removing outlier abnormal contours according to the distance;
and calculating the brightness average value of each candidate contour in the candidate contour set, and removing the abnormal contour according to each brightness average value.
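For orientation, the four comparison-based checks can be chained as sketched below; reject_close_pairs, reject_oversized_largest and reject_outliers are hypothetical helpers sketched after the paragraphs that describe them, and the median-relative brightness rule used for the fourth check is an assumption, since the embodiment only states that contours are rejected according to their brightness average values:

```python
import cv2
import numpy as np

def contour_mean_brightness(gray, contour):
    """Mean gray level of the pixels inside a contour."""
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    return cv2.mean(gray, mask=mask)[0]

def filter_by_contour_comparison(contours, gray, min_mean=80.0):
    contours = reject_close_pairs(contours, gray)     # pairwise distance check
    contours = reject_oversized_largest(contours)     # largest vs. second-largest area
    contours = reject_outliers(contours)              # adaptive outlier distance
    # Fourth check: drop contours much dimmer than the other candidates.
    means = [contour_mean_brightness(gray, c) for c in contours]
    if not means:
        return contours
    reference = float(np.median(means))
    return [c for c, m in zip(contours, means)
            if m >= min(min_mean, 0.5 * reference)]   # assumed rejection rule
```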
Optionally, the processor 1101 eliminates the abnormal contour according to the Euclidean distance and the minimum Manhattan distance, which specifically includes:
when at least one of the Euclidean distance and the minimum Manhattan distance is smaller than a preset distance threshold, respectively calculating the areas of the two candidate contours;
if the areas of both candidate contours are smaller than the preset area threshold, eliminating both candidate contours at the same time;
if at least one of the areas of the two candidate contours is not smaller than the preset area threshold, respectively calculating the brightness average values of the two candidate contours, and eliminating the candidate contour with the smaller brightness average value.
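A sketch of this pairwise check follows, using bounding boxes as the circumscribed rectangles and the contour_mean_brightness helper from the earlier sketch; the distance and area thresholds are placeholders:

```python
import cv2
import numpy as np

def reject_close_pairs(contours, gray, dist_thresh=8.0, area_thresh=20.0):
    """Reject contours that sit suspiciously close to each other."""
    keep = [True] * len(contours)
    boxes = [cv2.boundingRect(c) for c in contours]   # (x, y, w, h)

    for i in range(len(contours)):
        for j in range(i + 1, len(contours)):
            if not (keep[i] and keep[j]):
                continue
            xi, yi, wi, hi = boxes[i]
            xj, yj, wj, hj = boxes[j]

            # Euclidean distance between the centers of the two bounding boxes.
            d_euclid = np.hypot((xi + wi / 2.0) - (xj + wj / 2.0),
                                (yi + hi / 2.0) - (yj + hj / 2.0))

            # Minimum Manhattan distance between the box edges (0 if they overlap).
            gap_x = max(0, max(xi, xj) - min(xi + wi, xj + wj))
            gap_y = max(0, max(yi, yj) - min(yi + hi, yj + hj))
            d_manhattan = gap_x + gap_y

            if d_euclid >= dist_thresh and d_manhattan >= dist_thresh:
                continue  # far enough apart, keep both

            area_i = cv2.contourArea(contours[i])
            area_j = cv2.contourArea(contours[j])
            if area_i < area_thresh and area_j < area_thresh:
                keep[i] = keep[j] = False             # both tiny: drop both
            elif contour_mean_brightness(gray, contours[i]) < \
                    contour_mean_brightness(gray, contours[j]):
                keep[i] = False                       # drop the dimmer one
            else:
                keep[j] = False

    return [c for c, k in zip(contours, keep) if k]
```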
Optionally, the removing, by the processor 1101, of the abnormal contour according to the relation between the numbers of pixel points in the largest-area candidate contour and the second-largest-area candidate contour specifically includes:
if the numbers of pixel points in the largest-area candidate contour and the second-largest-area candidate contour exceed a preset pixel point number threshold, calculating the multiple between the numbers of pixel points in the two candidate contours;
and if the multiple is larger than a preset multiple threshold, eliminating the candidate contour with the largest area.
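A short sketch of this check follows, using the contour area as the pixel count of a contour; the pixel-count and multiple thresholds are assumed values:

```python
import cv2

def reject_oversized_largest(contours, pixel_thresh=50.0, ratio_thresh=4.0):
    """If the largest blob dwarfs the runner-up, treat it as an ambient light
    source rather than a handle LED and drop it."""
    if len(contours) < 2:
        return list(contours)
    ordered = sorted(contours, key=cv2.contourArea, reverse=True)
    n_largest = cv2.contourArea(ordered[0])   # pixel count approximated by area
    n_second = cv2.contourArea(ordered[1])
    if (n_largest > pixel_thresh and n_second > pixel_thresh
            and n_largest > ratio_thresh * n_second):
        return [c for c in contours if c is not ordered[0]]
    return list(contours)
```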
Optionally, the processor 1101 eliminates outlier contours according to the distance, specifically including:
determining a self-adaptive outlier distance according to the side length of the candidate contour and the side length median of all the candidate contours;
and if the number of all the candidate contours is larger than a preset number threshold and the distance is larger than the self-adaptive outlier distance, eliminating the candidate contour.
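A sketch of the outlier rejection follows; how the side length and the median side length are combined into the adaptive distance is not fixed by the embodiment, so the scale * max(...) rule and the numeric values are assumptions:

```python
import cv2
import numpy as np

def reject_outliers(contours, min_count=4, scale=6.0):
    """Drop candidates lying far away from every other candidate."""
    if len(contours) <= min_count:
        return list(contours)

    boxes = [cv2.boundingRect(c) for c in contours]
    centers = np.array([[x + w / 2.0, y + h / 2.0] for x, y, w, h in boxes])
    sides = np.array([max(w, h) for _, _, w, h in boxes], dtype=float)
    median_side = float(np.median(sides))

    kept = []
    for i, c in enumerate(contours):
        others = np.delete(centers, i, axis=0)
        nearest = float(np.min(np.linalg.norm(others - centers[i], axis=1)))
        adaptive_dist = scale * max(sides[i], median_side)   # assumed combination
        if nearest <= adaptive_dist:
            kept.append(c)
    return kept
```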
Optionally, the manner in which the processor 1101 rejects the abnormal contours in the candidate contour set based on a priori contour shape information includes one or more of:
according to the relation between the area of the candidate contour and the aspect ratio of its circumscribed rectangle, removing candidate contours whose aspect ratio exceeds a first preset proportion threshold;
rejecting candidate contours in which the ratio of the area of the candidate contour to the area of its circumscribed rectangle is smaller than a preset ratio threshold;
calculating the distances between the gray scale centroid point of the candidate contour and the center point of the circumscribed rectangle of the candidate contour on the horizontal axis and the vertical axis respectively, and calculating the proportion of each distance to the side length of the candidate contour respectively, and if at least one of the two proportions exceeds a second preset proportion threshold value, rejecting the candidate contour;
determining the roundness of the candidate contour according to the total number of pixel points contained in the candidate contour and the side length of the candidate contour, and eliminating the candidate contour if the roundness is lower than a preset roundness threshold;
calculating the brightness average value of the candidate contour, and eliminating the candidate contour if the brightness average value is smaller than a preset brightness threshold value;
and determining the brightness average value of a preset peripheral area of the circumscribed rectangle of the candidate contour and the brightness average value of the candidate contour, and eliminating the candidate contour if the difference between the two brightness average values is smaller than a preset difference value.
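These a priori checks could look roughly as follows; every numeric threshold is a placeholder, the geometric contour centroid is used as a stand-in for the gray-scale centroid, and the roundness formula (enclosed area over the area of a circle spanning the bounding box) is one plausible reading of the embodiment:

```python
import cv2
import numpy as np

def filter_by_shape_priors(contours, gray,
                           max_aspect=2.0, min_fill=0.4, max_offset=0.3,
                           min_roundness=0.5, min_mean=80.0, min_contrast=30.0):
    """Keep only contours that look like a bright, roughly circular LED spot."""
    kept = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        if w == 0 or h == 0 or area == 0:
            continue

        # 1. Aspect ratio of the circumscribed rectangle.
        if max(w, h) / float(min(w, h)) > max_aspect:
            continue

        # 2. Ratio of contour area to bounding-box area.
        if area / float(w * h) < min_fill:
            continue

        # 3. Centroid should sit near the bounding-box center on both axes.
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        if (abs(cx - (x + w / 2.0)) / w > max_offset or
                abs(cy - (y + h / 2.0)) / h > max_offset):
            continue

        # 4. Roundness: a filled circle spanning the box gives a value near 1.
        roundness = area / (np.pi * (max(w, h) / 2.0) ** 2)
        if roundness < min_roundness:
            continue

        # 5. The blob itself must be bright ...
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        inside = cv2.mean(gray, mask=mask)[0]
        if inside < min_mean:
            continue

        # 6. ... and clearly brighter than a small margin around its bounding box.
        x0, y0 = max(x - 3, 0), max(y - 3, 0)
        x1, y1 = min(x + w + 3, gray.shape[1]), min(y + h + 3, gray.shape[0])
        ring = gray[y0:y1, x0:x1].astype(np.float64)
        inner = gray[y:y + h, x:x + w].astype(np.float64)
        outside = (ring.sum() - inner.sum()) / max(ring.size - inner.size, 1)
        if inside - outside < min_contrast:
            continue

        kept.append(c)
    return kept
```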
It should be noted that fig. 11 is only an example and shows only the hardware necessary for the virtual display device to implement the steps of the method for detecting the light emitter on the handle provided in the embodiments of the present application. Although not shown, the virtual display device also includes conventional hardware such as speakers, headphones, lenses, power interfaces, and the like.
The processor referred to in fig. 11 of the present embodiment may be a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a graphics processor (Graphics Processing Unit, GPU), a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
Embodiments of the present application also provide a computer-readable storage medium storing instructions that, when executed, cause the method for detecting a light emitter on a handle in the foregoing embodiments to be performed.
Embodiments of the present application also provide a computer program product storing a computer program, the computer program being used to execute the method for detecting a light emitter on a handle in the foregoing embodiments.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method for detecting a light emitter on a handle, wherein the handle is used for controlling a display of a virtual display device, the method comprising:
acquiring an original handle image acquired by a multi-camera on the virtual display device, and carrying out gray processing on the original handle image to obtain a gray handle image;
determining the current ambient brightness according to an original histogram of the gray handle image, and performing binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarized handle image, wherein the target binarization threshold is obtained by weighting the binarization thresholds calculated by at least two binarization methods;
performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters;
removing abnormal contours from the candidate contour set according to a priori contour shape information and contour comparison information, respectively, to obtain a target contour set;
and obtaining, according to each target contour in the target contour set, each target illuminator on the handle, which is used for determining the relative pose between the handle and the virtual display device.
2. The method of claim 1, wherein the target binarization threshold value that matches the current ambient brightness is determined by:
removing pixel points with gray values lower than a preset gray threshold value in the gray handle image, and respectively determining respective binarization thresholds of the at least two binarization methods according to new histograms of the gray handle image after the pixel points are removed;
comparing the current ambient brightness with a preset brightness threshold;
respectively determining weights corresponding to the at least two binarization thresholds according to the comparison result;
and weighting according to each binarization threshold and the corresponding weight to obtain the target binarization threshold.
3. The method of claim 2, wherein determining weights for the respective binarization thresholds in the at least two binarization methods based on the comparison results, respectively, comprises:
when the current ambient brightness is larger than a preset brightness threshold, setting a first weight corresponding to a first binarization threshold calculated by a first binarization method and a second weight corresponding to a second binarization threshold calculated by a second binarization method;
when the current ambient brightness is smaller than or equal to the preset brightness threshold, setting a first weight corresponding to a first binarization threshold calculated by a first binarization method and a second weight corresponding to a second binarization threshold calculated by a second binarization method.
4. The method of claim 2, wherein when the binarization threshold determined according to the new histogram is lower than a preset minimum guarantee threshold, the binarization threshold is set to the preset minimum guarantee threshold.
5. The method of claim 1, wherein the manner of culling the abnormal contours in the candidate contour set based on contour comparison information comprises one or more of:
for every two candidate contours in the candidate contour set, determining the Euclidean distance between the center points of the circumscribed rectangles of the two candidate contours and the minimum Manhattan distance between the edges of the two candidate contours, and eliminating abnormal contours according to the Euclidean distance and the minimum Manhattan distance;
ranking all candidate contours in the candidate contour set according to their areas, and eliminating abnormal contours according to the relation between the numbers of pixel points in the candidate contour with the largest area and the candidate contour with the second largest area;
for each candidate contour in the candidate contour set, calculating the distance between the candidate contour and the nearest candidate contour, and removing outlier abnormal contours according to the distance;
and calculating the brightness average value of each candidate contour in the candidate contour set, and removing the abnormal contour according to each brightness average value.
6. The method of claim 5, wherein the culling of the abnormal contour based on the Euclidean distance and the minimum Manhattan distance comprises:
when at least one of the Euclidean distance and the minimum Manhattan distance is smaller than a preset distance threshold, respectively calculating the areas of the two candidate contours;
if the areas of both candidate contours are smaller than the preset area threshold, eliminating both candidate contours at the same time;
if at least one of the areas of the two candidate contours is not smaller than the preset area threshold, respectively calculating the brightness average values of the two candidate contours, and eliminating the candidate contour with the smaller brightness average value.
7. The method of claim 5, wherein the culling of the abnormal contour based on the relation between the numbers of pixel points in the largest-area candidate contour and the second-largest-area candidate contour comprises:
if the numbers of pixel points in the largest-area candidate contour and the second-largest-area candidate contour exceed a preset pixel point number threshold, calculating the multiple between the numbers of pixel points in the two candidate contours;
and if the multiple is larger than a preset multiple threshold, eliminating the candidate contour with the largest area.
8. The method of claim 5, wherein said culling outlier contours based on said distance comprises:
determining a self-adaptive outlier distance according to the side length of the candidate contour and the side length median of all the candidate contours;
and if the number of all the candidate contours is larger than a preset number threshold and the distance is larger than the self-adaptive outlier distance, eliminating the candidate contour.
9. The method of claim 1, wherein the manner of culling the abnormal contours in the candidate contour set based on a priori contour shape information comprises one or more of:
according to the relation between the area of the candidate contour and the aspect ratio of its circumscribed rectangle, removing candidate contours whose aspect ratio exceeds a first preset proportion threshold;
rejecting candidate contours in which the ratio of the area of the candidate contour to the area of its circumscribed rectangle is smaller than a preset ratio threshold;
calculating the distances between the gray scale centroid point of the candidate contour and the center point of the circumscribed rectangle of the candidate contour on the horizontal axis and the vertical axis respectively, and calculating the proportion of each distance to the side length of the candidate contour respectively, and if at least one of the two proportions exceeds a second preset proportion threshold value, rejecting the candidate contour;
determining the roundness of the candidate contour according to the total number of pixel points contained in the candidate contour and the side length of the candidate contour, and eliminating the candidate contour if the roundness is lower than a preset roundness threshold;
calculating the brightness average value of the candidate contour, and eliminating the candidate contour if the brightness average value is smaller than a preset brightness threshold value;
and determining the brightness average value of a preset peripheral area of the circumscribed rectangle of the candidate contour and the brightness average value of the candidate contour, and eliminating the candidate contour if the difference between the two brightness average values is smaller than a preset difference value.
10. A virtual display device, characterized by comprising a processor, a memory, a display screen, a communication interface and a multi-view camera, wherein the virtual display device communicates with a handle through the communication interface, and the multi-view camera, the display screen, the memory and the processor are connected through a bus;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
acquiring an original handle image acquired by the multi-camera, and carrying out gray processing on the original handle image to obtain a gray handle image;
determining the current ambient brightness according to an original histogram of the gray handle image, and performing binarization processing on the gray handle image by adopting a target binarization threshold matched with the current ambient brightness to obtain a binarized handle image, wherein the target binarization threshold is obtained by weighting the binarization thresholds calculated by at least two binarization methods;
performing contour detection on the binarized handle image to obtain candidate contour sets of a plurality of initial light emitters;
removing abnormal contours from the candidate contour set according to a priori contour shape information and contour comparison information, respectively, to obtain a target contour set;
and obtaining, according to each target contour in the target contour set, each target illuminator on the handle, which is used for determining the relative pose between the handle and the virtual display device.
CN202211149262.5A 2022-09-21 2022-09-21 Method for detecting illuminator on handle and virtual display device Pending CN116433569A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211149262.5A CN116433569A (en) 2022-09-21 2022-09-21 Method for detecting illuminator on handle and virtual display device
PCT/CN2023/119844 WO2024061238A1 (en) 2022-09-21 2023-09-19 Method for estimating pose of handle, and virtual display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211149262.5A CN116433569A (en) 2022-09-21 2022-09-21 Method for detecting illuminator on handle and virtual display device

Publications (1)

Publication Number Publication Date
CN116433569A true CN116433569A (en) 2023-07-14

Family

ID=87093116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211149262.5A Pending CN116433569A (en) 2022-09-21 2022-09-21 Method for detecting illuminator on handle and virtual display device

Country Status (1)

Country Link
CN (1) CN116433569A (en)

Similar Documents

Publication Publication Date Title
US9710973B2 (en) Low-latency fusing of virtual and real content
US10739849B2 (en) Selective peripheral vision filtering in a foveated rendering system
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
JP6592000B2 (en) Dealing with glare in eye tracking
CN106575357B (en) Pupil detection
US8884984B2 (en) Fusing virtual content into real content
RU2705432C2 (en) Eye tracking through eyeglasses
JP5470262B2 (en) Binocular detection and tracking method and apparatus
US8705868B2 (en) Computer-readable storage medium, image recognition apparatus, image recognition system, and image recognition method
US8699749B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
US10311589B2 (en) Model-based three-dimensional head pose estimation
US8571266B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
US20120219228A1 (en) Computer-readable storage medium, image recognition apparatus, image recognition system, and image recognition method
US8718325B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
US20220005257A1 (en) Adaptive ray tracing suitable for shadow rendering
US11308321B2 (en) Method and system for 3D cornea position estimation
JP6221292B2 (en) Concentration determination program, concentration determination device, and concentration determination method
KR101712350B1 (en) Near-eye display device for selecting virtual object, method for selecting virtual object using the device and recording medium for performing the method
CN116433569A (en) Method for detecting illuminator on handle and virtual display device
US8705869B2 (en) Computer-readable storage medium, image recognition apparatus, image recognition system, and image recognition method
Fritz et al. Evaluating RGB+ D hand posture detection methods for mobile 3D interaction
TWI767179B (en) Method, virtual reality system and recording medium for detecting real-world light resource in mixed reality
TWM650161U (en) Autostereoscopic 3d reality system
CN117372475A (en) Eyeball tracking method and electronic equipment
CN116704974A (en) Luminance adjusting method and device based on pupil diameter change

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination