WO2019153971A1 - Visual interaction device and marker - Google Patents
- Publication number
- WO2019153971A1 (PCT/CN2018/125598)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- marker
- layer
- visual interaction
- markers
- interaction device
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
Definitions
- the present application relates to the field of interaction technologies, and in particular, to a visual interaction device and a marker.
- augmented reality is a technology that enhances the user's perception of the real world through information provided by a computer system. It superimposes computer-generated virtual objects, scenes, or system prompts onto real scenes, thereby enhancing or modifying the user's perception of the real-world environment or of data representing it.
- the embodiments of the present application provide a visual interaction device and a marker, which enable recognition and tracking in virtual reality or augmented reality applications and improve the virtual reality or augmented reality effect.
- a visual interaction device for use in an identification tracking system, characterized in that the visual interaction device comprises a device body, the surface of the device body being provided with one or more markers that are captured by an image acquisition device in the identification tracking system to determine position and orientation information of the visual interaction device.
- a marker applied to an identification tracking system, characterized in that the marker comprises a plurality of mutually separated sub-markers, each sub-marker having one or more feature points therein, the marker being captured by an image acquisition device in the identification tracking system to determine position and orientation information of the marker.
- a visual interaction device comprising a handle, characterized in that it is applied to an identification tracking system, the visual interaction device further comprising a device body connected to the handle, the device body being provided with a dynamic zone. One end of the handle is received in the device body, and the portion of the handle received in the device body is provided with a command area and a non-command area; when the handle is in a first state, the dynamic zone displays the command area, and when the handle is in a second state, the dynamic zone displays the non-command area.
- FIG. 1 is a structural diagram of an identification tracking system provided by an embodiment of the present application.
- Figure 2 is a schematic illustration of a marker in one embodiment
- Figure 3a is a structural diagram of a visual interaction device in one embodiment
- Figure 3b is a structural diagram of a visual interaction device in another embodiment
- Figure 3c is a structural diagram of a visual interaction device in another embodiment
- Figure 3d is a structural diagram of a visual interaction device in another embodiment
- Figure 3e is a structural diagram of a visual interaction device in another embodiment
- Figure 4 is a structural view of a planar marking object in one embodiment
- Figure 5 is a cross-sectional view showing the structure of the planar marking object shown in Figure 4 in one embodiment
- Figure 6 is a cross-sectional view showing the structure of the planar marking object shown in Figure 4 in another embodiment
- Figure 7 is a cross-sectional view showing the structure of the planar marking object shown in Figure 4 in another embodiment
- Figure 8 is a cross-sectional view showing the structure of the planar marking object shown in Figure 4 in another embodiment
- Figure 9 is a structural view of a multi-sided marking structure in one embodiment
- Figure 10 is a structural view of the multi-sided marking structure shown in Figure 9 from another perspective
- Figure 11 is a structural view of a multi-sided marking structure in another embodiment
- Figure 12 is a cross-sectional view showing the structure of the multi-sided marking structure shown in Figure 11 in one embodiment
- Figure 13 is a cross-sectional view showing the structure of the multi-sided marking structure shown in Figure 11 in another embodiment
- Figure 14 is a cross-sectional view showing the structure of the multi-sided marking structure shown in Figure 11 in another embodiment
- Figures 15(a) to 15(i) are schematic perspective views of a device body of a visual interaction device according to another embodiment of the present application.
- Figure 16 is a schematic illustration of a marker in another embodiment
- Figure 17 is a structural view of a multi-sided marking structure in another embodiment
- Figure 18 is an exploded perspective view showing the structure of the multi-sided marking structure in one embodiment
- Figure 19 is a diagram showing an application scenario of a multi-sided mark structure in one embodiment.
- the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined by "first" and "second" may explicitly or implicitly include one or more of those features. In the description of the present application, "a plurality" means two or more unless specifically defined otherwise.
- unless otherwise explicitly stated and defined, the terms "installation", "connected", and "fixed" shall be understood broadly: a connection may be fixed, detachable, or integrated; mechanical or electrical; direct, or indirect through an intermediate medium; and may be internal communication between two elements or an interaction between two elements. The specific meanings of these terms in the present application can be understood on a case-by-case basis.
- the identification tracking system 10 includes a head mounted display device 100 and a visual interaction device 200.
- the visual interaction device 200 can include at least one marker, and the marker can include one or more sub-markers distributed according to a rule, each sub-marker having one or more feature points.
- the distribution rules of the sub-markers within each marker may be different, and thus the images corresponding to each marker are different from each other.
- the head mounted display device 100 can acquire an image of the visual interaction device 200 that includes its markers, identify and track those markers in the acquired image, and obtain the position and rotation information of the visual interaction device 200, thereby displaying virtual content according to that position and rotation information and achieving an augmented reality effect.
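The identify-then-localize step described above can be sketched as planar pose recovery: given the known layout of a marker's feature points and their detected pixel positions, a homography decomposition yields the marker's position and rotation relative to the camera. Everything numeric below (intrinsics, marker size, the ground-truth pose used to synthesize detections) is an illustrative assumption, not data from this application.

```python
import numpy as np

# Sketch of the planar-marker pose recovery described above: from the known
# feature-point layout of a marker and its detected pixel positions, recover
# the marker's rotation and translation relative to the camera by homography
# decomposition. All numbers here are invented for illustration.

K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Four feature points on the marker plane (z = 0), in meters.
marker_pts = np.array([[-0.05, -0.05], [0.05, -0.05],
                       [0.05, 0.05], [-0.05, 0.05]])

def homography(plane_pts, img_pts):
    """Direct linear transform from plane-to-image point correspondences."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, img_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)         # null vector = flattened H

def marker_pose(plane_pts, img_pts, K):
    """Decompose H = K [r1 r2 t] into the marker's rotation and translation."""
    Hn = np.linalg.inv(K) @ homography(plane_pts, img_pts)
    lam = 1.0 / np.linalg.norm(Hn[:, 0])
    if lam * Hn[2, 2] < 0:              # keep the marker in front of the camera
        lam = -lam
    r1, r2, t = lam * Hn[:, 0], lam * Hn[:, 1], lam * Hn[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)         # re-orthonormalize against noise
    return U @ Vt, t

# Synthesize "detected" pixels from a known pose, then recover that pose.
R_true = np.array([[0.9950042, 0.0, 0.0998334],
                   [0.0, 1.0, 0.0],
                   [-0.0998334, 0.0, 0.9950042]])   # 0.1 rad about y
t_true = np.array([0.10, -0.05, 1.00])              # 1 m in front of camera
pts_3d = np.column_stack([marker_pts, np.zeros(4)])
proj = (K @ (R_true @ pts_3d.T + t_true[:, None])).T
img_pts = proj[:, :2] / proj[:, 2:]

R_est, t_est = marker_pose(marker_pts, img_pts, K)
```

A production tracker would refine this closed-form estimate by iterative reprojection-error minimization and fuse it across frames; the sketch shows only the geometric core.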
- the head mounted display device 100 includes a housing (not labeled), an image capture device 110, a display device 120, an optical assembly 130, a processor 140, and a lighting device 150.
- the display device 120 and the image capture device 110 are both electrically connected to the processor; in some embodiments, the illumination device 150 and the image capture device 110 are both covered by a filter (not labeled) mounted in the housing, and the filter can remove interference light such as ambient light. For example, if the illumination device 150 emits infrared light, the filter can be an element that blocks light other than infrared light.
- the image capture device 110 is configured to collect an image of the object to be photographed and send it to the processor. Specifically, the image capture device 110 captures an image including at least one of the above-described planar mark plate or multi-face mark structure and transmits it to the processor 140.
- the image capture device 110 can be a monocular near-infrared camera. In the embodiment of the present application, the image capture device 110 receives infrared light and is a monocular camera, which is low in cost, requires no extrinsic calibration between binocular cameras, consumes little power, and achieves a higher frame rate under the same bandwidth.
- the processor 140 is configured to output corresponding display content according to the image to the display device 120, and is also used to perform an operation of recognizing and tracking the visual interaction device 200.
- Processor 140 may comprise any suitable type of general purpose or special purpose microprocessor, digital signal processor or microcontroller.
- the processor 140 can be configured to receive data and/or signals from various components of the system via, for example, a network.
- Processor 140 may also process data and/or signals to determine one or more operating conditions in the system.
- when the processor 140 is applied to the head mounted display device, the processor generates image data of the virtual world from image data stored in advance and sends it to the display device 120 for display through the optical component 130; it can also receive image data transmitted from a smart terminal or computer over a wired or wireless network, generate an image of the virtual world from the received data, and send it to the display device 120 for display through the optical component 130; and it can further perform the identification and tracking operation on the visual interaction device 200 according to the image captured by the image acquisition device, determine the corresponding display content of the virtual world, and transmit that content to the display device 120 for display through the optical component 130. It can be understood that the processor 140 is not limited to being installed in the head mounted display device 100.
- the head mounted display device 100 further includes a visual range camera 160 disposed on the housing, the visual range camera 160 being electrically coupled to the processor 140 and used to capture images of the real-world scene and send them to the processor 140.
- the processor 140 can use visual odometry to acquire the position and rotation relationship of the user's head in the real scene according to the scene images captured by the visual range camera 160.
- the processor 140 obtains the specific position and direction changes of the head mounted display device 100 from the image sequence acquired by the visual range camera 160, through feature extraction, feature matching and tracking, and motion estimation, thereby completing navigation and positioning and obtaining the relative position and rotation relationship between the head mounted display device 100 and the real scene.
- the processor 140 can calculate the relative position and rotation relationship between the visual interaction device 200 and the real scene, thereby implementing more complex forms of interaction and experience.
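The feature-extraction, matching, and motion-estimation pipeline above can be illustrated at its final step: once features have been matched and their 3D positions recovered (e.g., by triangulation), the head's motion between two frames is a rigid alignment problem. The sketch below solves it with the Kabsch algorithm on synthetic points; the point values and the motion are invented for illustration and are not from this application.

```python
import numpy as np

# Sketch of the motion-estimation step of the visual odometry described
# above. It assumes feature extraction and matching have already yielded
# corresponding 3D feature positions at two head poses; all values invented.

def estimate_motion(pts_prev, pts_curr):
    """Kabsch alignment: find R, t such that pts_curr ≈ pts_prev @ R.T + t."""
    c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # proper rotation (no reflection)
    t = c_curr - R @ c_prev
    return R, t

rng = np.random.default_rng(0)
pts_prev = rng.uniform(-1.0, 1.0, size=(8, 3))      # matched scene features
R_true = np.array([[0.9950042, -0.0998334, 0.0],
                   [0.0998334, 0.9950042, 0.0],
                   [0.0, 0.0, 1.0]])                # small turn about z
t_true = np.array([0.02, 0.00, -0.10])              # slight head motion
pts_curr = pts_prev @ R_true.T + t_true

R_est, t_est = estimate_motion(pts_prev, pts_curr)
```

Real visual odometry additionally handles outlier matches (e.g., with RANSAC) and accumulates these per-frame motions into a trajectory.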
- the display device 120 is configured to display the display content output by the processor 140.
- display device 120 can be part of a smart terminal that is coupled to the head mounted display device 100, i.e., the display screen of the smart terminal, such as the display screen of a cell phone or tablet.
- the display device 120 can also be a stand-alone display (eg, LED, OLED or LCD) or the like, in which case the display device is fixedly mounted on the housing.
- the housing is provided with a mounting structure for mounting the smart terminal.
- the smart terminal is mounted on the housing through the mounting structure.
- the processor 140 may be a processor in the smart terminal, or may be a processor independently disposed in the housing, and electrically connected to the intelligent terminal through a data line or a communication interface.
- when the display device 120 is a display separate from a terminal device such as a smart terminal, it is fixedly mounted on the housing.
- the optical component 130 is configured to direct incident light emitted from the light emitting surface of the display device 120 to a preset position.
- the preset position is an observation position of the user's eyes when the user wears the head mounted display device 100.
- the illumination device 150 is configured to provide light when the image acquisition device 110 captures an image of the object to be photographed. Specifically, the illumination angle and the number of illumination devices 150 can be set according to actual use, so that the emitted light covers the object to be photographed. The illumination device 150 is an infrared illumination device capable of emitting infrared light, and the image acquisition device 110 is then a near-infrared camera that receives infrared light. This active illumination improves the quality of the target image acquired by the image capture device 110.
- the number of the illumination devices 150 is not limited and may be one or plural.
- the illumination device 150 is disposed in the vicinity of the image acquisition device 110; for example, a plurality of illumination devices 150 may be disposed around the camera of the image acquisition device 110, such as in a ring around it. The arrangement is not limited here.
- visual interaction device 200 can include a planar marker object and a multi-faceted marker structure.
- the planar marking object includes a first marking plate 310 and a second marking plate 320.
- the multi-sided marking structure includes a six-sided marking structure 410 and a twenty-six-sided marking structure 420; marking structures with other numbers of faces are of course possible and are not enumerated here.
- the visual interaction device 200 can be a planar marking object, and the planar marking object can be provided with a marking surface on which its markers are disposed.
- the planar marking object may be the first marking plate 310, the second marking plate 320, and the like.
- a plurality of markers may be disposed on the first marking plate 310, and the contents of the plurality of markers are different from each other.
- the plurality of markers on the first marking plate 310 may be disposed on the same plane.
- the first marking plate 310 is provided with a marking surface, and all the markings are disposed on the marking surface of the first marking plate 310.
- the feature points of the respective markers on the marking plate 310 are on the marking surface.
- the second marking plate 320 may be provided with a marking object, the second marking plate is provided with a marking surface, the marking is disposed on the marking surface, and the feature points of the marking on the second marking plate 320 are all on the marking surface.
- the number of second marking plates 320 may be plural, the marker contents of each second marking plate 320 differing from one another, and the plurality of second marking plates 320 may be used in combination, for example in the augmented reality application corresponding to the identification tracking system 10, or in application fields such as virtual reality.
- the visual interaction device 200 can be a multi-faceted marking structure.
- the multi-faceted marking structure includes a plurality of marking faces, and at least two of the non-coplanar marking faces are provided with markers.
- the multi-sided marking structure may be a six-sided marking structure 410, a twenty-six-sided marking structure 420, etc. The six-sided marking structure 410 may include six marking surfaces, each provided with a marker, and the marker patterns on the surfaces differ from one another.
- the twenty-six-sided marking structure 420 may include twenty-six faces, of which 17 may serve as marking surfaces; each marking surface is provided with a marker, and the marker patterns on the faces differ from one another.
- the total number of faces of the multi-faceted marking structure, the designation of the marking surfaces, and the arrangement of the markers may be set according to actual use, and are not limited herein.
- the visual interaction device is not limited to the above planar marker object and multi-faceted marker structure; it may be any carrier provided with a marker, and the carrier may be chosen according to the actual scene, such as a model gun like a toy gun or game gun, with corresponding markers set on it. By tracking the markers, the position and rotation information of the model gun can be obtained, and the user can perform game operations in the virtual scene by holding the model gun, achieving augmented reality.
- visual interaction device 200 includes a first background and at least one marker distributed over a first background according to a particular rule.
- the marker includes a second background and a plurality of sub-markers distributed to the second background according to a particular rule, each sub-marker having one or more feature points.
- the first background and the second background have a certain degree of discrimination.
- the first background may be black and the second background may be white.
- the distribution rules of the sub-markers in each marker are different, and therefore, the images corresponding to each marker are different from each other.
- the sub-marker may be a pattern having a shape, and the color of the sub-marker has a certain degree of discrimination from the second background in the marker, for example, the second background is white, and the sub-marker is black.
- the sub-marker may be composed of one or more feature points, and the shape of the feature points is not limited, and may be a dot, a ring, or other shapes such as a triangle.
- the marker 210 includes a plurality of sub-markers 212, and each sub-marker 212 is composed of one or more feature points 214; each white circular pattern in FIG. 2 is a feature point 214.
- the outline of the marker 210 is a rectangle.
- the shape of the marker may also be other shapes, which is not limited herein.
- a rectangular white area (i.e., a second background) and a plurality of sub-markers 212 within the white area together constitute a marker 210.
- the marker 210 includes a plurality of sub-markers 212, and each sub-marker 212 is composed of one or more feature points 214, which may be black dots or white dots.
- One or more black dots 214 may be included in one sub-marker 212, and one or more white dots 214 may also be included in one sub-marker 212.
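Because the sub-marker distribution differs from marker to marker, a signature computed from the detected dots can tell which marker is in view. The encoding below (sorted white-dot counts per sub-marker) and the marker library are purely hypothetical, chosen to illustrate the idea rather than the actual scheme of this application.

```python
# Hypothetical identification scheme exploiting the property above: since
# the sub-marker distribution differs between markers, a signature over the
# detected dots distinguishes them. Encoding and library are illustrative.

def marker_signature(sub_markers):
    """Each sub-marker is a list of dots, each 'white' or 'black'; the
    signature is the sorted tuple of white-dot counts per sub-marker."""
    return tuple(sorted(sum(1 for d in dots if d == "white")
                        for dots in sub_markers))

# Library mapping known signatures to marker identities.
MARKER_LIBRARY = {
    (0, 1, 2): "marker-A",
    (1, 1, 3): "marker-B",
    (2, 2, 2): "marker-C",
}

def identify(sub_markers):
    return MARKER_LIBRARY.get(marker_signature(sub_markers), "unknown")

# Three detected sub-markers with 1, 1 and 3 white dots.
detected = [["black", "white"],
            ["white"],
            ["white", "white", "white"]]
result = identify(detected)
```

Any encoding works so long as each marker's signature is unique within the library, which is exactly the property the passage above requires of the sub-marker distributions.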
- the image capture device 110 collects a target image including the multi-faceted marking structure 500; the processor 140 acquires the target image and related information, recognizes the multi-faceted marking structure 500, and acquires the position and rotation relationship between the markers in the target image and the image capture device, further determining the position and rotation relationship of the multi-faceted marking structure 500 relative to the head mounted display device 100, so that the virtual scene viewed by the user appears at the corresponding position and rotation angle.
- the user can also combine a plurality of multi-faceted marking structures 500 to generate a new virtual image in the virtual scene, enhancing the display effect of the virtual image.
- the user can also interact with the virtual scene through the multi-faceted tag structure 500.
- the identification tracking system 10 can also acquire the position and rotation relationship between the head mounted display device 100 and the real scene through the visual range camera 160, and thereby acquire the position and rotation relationship between the multi-faceted marking structure 500 and the real scene. When the virtual scene corresponds to the real scene, a virtual scene similar to the real scene can be constructed, achieving a more realistic augmented reality effect.
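The transform chain just described, display-in-world from the visual range camera and marker-in-display from the tracking camera, composes into marker-in-world as a product of homogeneous transforms. The numeric poses below are made-up examples (identity rotations for readability), not values from this application.

```python
import numpy as np

# Sketch of the transform chain described above: the visual range camera
# gives the head mounted display's pose in the real scene, the tracking
# camera gives the marker structure's pose relative to the display, and
# their composition gives the marker structure's pose in the real scene.

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Head mounted display in world coordinates (from visual odometry).
T_world_hmd = pose(np.eye(3), np.array([0.0, 1.6, 0.0]))    # 1.6 m eye height
# Marker structure relative to the display (from marker tracking).
T_hmd_marker = pose(np.eye(3), np.array([0.0, -0.3, 0.8]))  # in front, below

# Marker structure in world coordinates: compose the two transforms.
T_world_marker = T_world_hmd @ T_hmd_marker
marker_world_pos = T_world_marker[:3, 3]
```

With real rotations the same matrix product applies unchanged, which is why homogeneous 4x4 transforms are the usual representation for chains like this.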
- the embodiments of the present application are mainly described in detail for the above-mentioned visual interaction device.
- the visual interaction device may be a planar marking object, a curved marking object or a three-dimensional marking structure or the like.
- the embodiment of the present application provides a visual interaction device, which can include a planar marker object and a multi-faceted marker structure, for the above identification tracking system applicable to virtual reality and augmented reality.
- the planar marking object may be the first marking plate 310
- the multi-sided marking structure may be a six-sided marking structure 410 or a twenty-six-sided marking structure 420.
- the visual interaction device can include a device body and one or more markers disposed on a surface of the device body.
- the marker may be disposed on one surface of the planar marker object; as shown in FIG. 3a, the first marking plate 310 includes the device body 311 and one or more markers 210 disposed on the surface of the device body 311.
- the marker can be disposed on one or more surfaces of the multi-faceted marker structure.
- as shown in the figures, the six-sided marking structure 410 may include a device body 411 and a marker 210 disposed on one surface of the device body 411, and the twenty-six-sided marking structure 420 may include a device body 421 and markers 210 disposed on different surfaces of the device body 421. One or more markers may be provided on any surface of the visual interaction device.
- the surface on which the marker is disposed in the visual interaction device can serve as a marking surface for the visual interaction device.
- the device body 411 of the six-sided marking structure 410 includes a plurality of surfaces, and a marker 210 may be disposed at the intersection of two adjacent surfaces of the device body 411; that is, one marker can span the surfaces of multiple adjacent planes.
- the markers may also be disposed on a surface of the device body that is not planar, such as a spherical or curved surface; as shown in FIG. 3e, the marker 210 may be disposed on the spherical surface of the device body 431. It can be understood that the device body, and the manner in which markers are disposed on it, are not limited to the above; the device body may have other shapes and the markers may be arranged in other ways, which are not limited herein.
- one or more markers in the visual interaction device may protrude from the device body, i.e., the marker is a layered structure disposed on the surface of the device body.
- the surface of the device body may be provided with a groove corresponding to the number of markers, and the marker may be correspondingly disposed in the groove of the surface of the device body.
- the marker disposed on the device body is placed in a groove in the surface of the device body, and the depth of the groove may equal the thickness of the marker so that the outer surface of the marker is flush with the top of the groove; of course, the depth of the groove is not limited in this embodiment of the present application.
- the visual interaction device can be a planar marker object
- FIG. 4 is a structural diagram of a planar marker object in one embodiment.
- the planar marker object 300 is captured by the image acquisition device 110 and is identified and tracked by the processor 140 to determine the position and rotation relationship between the planar marker object 300 and the image capture device 110.
- the head mounted display device 100 can determine the position and rotation angle, relative to the user, of the virtual scene displayed to the user by the display device 120, by identifying and tracking the position and rotation relationship of the planar marker object 300 relative to the image capture device 110.
- the head mounted display device 100 can also determine the position and rotation angle of a virtual character relative to the user by identifying and tracking the planar marker object 300, and present the virtual character to the user through the display device 120 and the optical component 130; the data for constructing the virtual character may be pre-stored data or data downloaded in real time from the cloud.
- the head mounted display device 100 can determine the distance of the planar marker object 300 from the user wearing the head mounted display device 100 based on the positional relationship of the planar marker object 300 and the image capture device 110.
- the head mounted display device 100 identifies and tracks the planar marker object 300 through the image capture device 110 and the processor 140; the processor 140 can acquire the identity information of the markers in the planar marker object 300 and the position and rotation relationship between the planar marker object 300 and the image capture device 110, and may use the coordinate system of the planar marker object 300 as a reference coordinate system to determine the position of the virtual scene displayed in the head mounted display device 100.
- the planar marking object 300 may include a device body (not shown) provided with a base layer 302, and the base layer 302 may be provided with one or more markers 210; when there are a plurality of markers 210, they are dispersedly disposed on the base layer 302.
- the base layer 302 may be made of a soft material, and the base layer 302 may also be made of a hard material.
- the base layer 302 may be made of cloth, or may be made of plastic or the like.
- the base layer 302 may be made of a metal material, an alloy material, or the like.
- the base layer 302 can be provided with folds so that it can be folded, facilitating folded storage of the planar marker object 300.
- the planar marking object 300 is provided with two folds perpendicular to each other, the two folds dividing the planar marking object 300 into four regions; after folding along the two folds, the planar marking object 300 can be stacked down to the size of a single region.
- the shape of the base layer 302 is not limited, and may be a circle, may be a triangle, may be a square, may be a rectangle, or may be an irregular polygon.
- the base layer 302 is a square, and the size of the base layer 302 can be differently set according to actual needs, which is not limited herein.
- the surface of the base layer 302 adjacent to the marker 210 may be a flat surface or a curved surface.
- one or more markers 210 are stacked on the base layer 302, wherein each marker 210 includes a first identification layer and a second identification layer on the first identification layer; the second identification layer is distinguished from the first identification layer so as to form the sub-markers 212.
- the first marking layer can be made of a reflective material.
- the first marking layer is used as a reflective layer as an example for description.
- Each of the markers 210 includes a light reflecting layer 216 and a plurality of sub-markers 212 on the light reflecting layer 216. It will be understood that the size of the light reflecting layer 216 is smaller than the size of the base layer 302.
- the light reflecting layer 216 can be disposed at any position on the base layer 302 as needed; when there are a plurality of light reflecting layers 216, the relative positions between them are not limited and can likewise be set as needed.
- the shape of the light reflecting layer 216 is also not limited, and may be a circle, a triangle, a square, a rectangle, or an irregular polygon. As an embodiment, the shape of the light reflecting layer 216 is square.
- the planar marking object 300 may further include a coating layer 304, wherein the coating layer 304 is made of a non-reflective material, for example, the coating layer 304 may be made of ink.
- the coating layer 304 is disposed between the base layer 302 and the light reflecting layer 216 to cover the base layer 302. Specifically, the shape and size of the coating layer 304 may be consistent with the base layer 302, and the base layer 302 may be completely covered. At this time, one or more light reflecting layers 216 are disposed on the coating layer 304.
- the base layer 302, the sub-markers 212, the reflective layer 216, and the coating layer 304 may together form the planar marking object 300.
- the reflective layer 216 reflects light back to the image capture device 110, while the sub-markers 212 and the coating layer 304 do not; therefore, the image acquisition device 110 can acquire image information containing the marker 210.
- one or more coating layers 304 are disposed on the base layer 302 and are in the same layer as the one or more light reflecting layers 216 , one or more coating layers 304 and one or more The reflective layers 216 collectively cover the base layer 302, and the sub-markers 212 are correspondingly disposed on the reflective layer 216.
- the exposed area of the base layer 302 is covered by the one or more coating layers 304; on the one hand this saves coating material, and on the other hand it prevents any weak reflection from the base layer 302 from affecting the accuracy of identifying the markers 210.
- the sub-marker 212 can be formed by applying ink on the light-reflecting layer 216, that is, the sub-marker 212 can be formed of ink.
- the sub-marker 212 may also be formed using a reflective material, in which case the entire base layer 302 is covered with ink and the sub-markers 212 themselves reflect light, instead of using the hybrid design of the coating layer 304 and the reflective layer 216.
- the remaining portion of the planar marking object 300 other than the sub-markers 212 is not reflective or only slightly reflective, thereby enabling the image capture device 110 to acquire the sub-markers 212.
- the planar marking object 300 further includes a filter layer 306 .
- the filter layer 306 is laminated on the surface of the reflective layer 216 or laminated on the surface of the coating layer 304 and the reflective layer 216 .
- the filter layer 306 can be located at the uppermost layer of the planar marker object 300.
- the filter layer 306 can filter the interference light to make the image information acquired by the image acquisition device 110 more accurate.
- when the light emitted by the illumination device 150 is infrared light, the filter layer 306 may be a filter that blocks light other than infrared light; when the light emitted by the illumination device 150 is ultraviolet light, the filter layer 306 may block light other than ultraviolet light, thereby filtering out interference light and making the acquired image information more accurate.
- the side of the base layer 302 facing the reflective layers 216 may be provided with the same number of grooves as there are reflective layers 216; the one or more reflective layers 216 may be correspondingly disposed in the grooves, and the height of each reflective layer 216 may equal the depth of its groove.
- each groove and the corresponding light reflecting layer 216 disposed in the groove have the same size and shape, and the depth of each groove is consistent with the height of the light reflecting layer 216 corresponding to the groove.
- the specific size, shape and depth of the groove are not specifically limited herein.
- the base layer 302 is provided with the same number of bumps as the light reflecting layer 216, and the one or more light reflecting layers 216 are correspondingly disposed on the bumps, that is, the light reflecting layer 216 protrudes from the base layer 302.
- when the visual interaction device is a curved marking object or a three-dimensional marking structure, it may likewise include any of the above-mentioned base layer 302, marker 210, coating layer 304, and filter layer 306.
- a plurality of features are provided to facilitate recognition/tracking of the visual interaction device by the image capture device, and the description is not exhaustive.
- the visual interaction device provided by the embodiment of the present application is applied to the identification tracking system.
- the head-mounted display device in the identification tracking system can obtain the relative position and rotation relationship between the visual interaction device and itself by recognizing the marker on the visual interaction device, and can display virtual content according to that relative position and rotation, improving the virtual image display effect in augmented reality or virtual reality applications.
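Once such a relative position and rotation are known, anchoring virtual content amounts to a coordinate transform. The following minimal Python sketch is illustrative only (all names are assumptions, not terms from this application), and reduces the rotation to a single yaw angle for brevity:

```python
import math

def anchor_virtual_content(position, yaw_rad, local_offset):
    """Transform a content-local offset into the display frame using the
    device's recognized relative pose (yaw-only rotation for brevity)."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x, y, z = local_offset
    # Rotate the offset about the vertical axis, then translate it.
    rotated = (c * x - s * z, y, s * x + c * z)
    return tuple(p + r for p, r in zip(position, rotated))

# Content placed 1 unit in front of a device at the origin, yawed 90 degrees.
pos = anchor_virtual_content((0.0, 0.0, 0.0), math.pi / 2, (0.0, 0.0, 1.0))
```

In practice a full 3x3 rotation matrix or quaternion estimated from the marker image would replace the single yaw angle.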
- the visual interaction device can be a multi-faceted marker structure.
- the multi-faceted marker structure 400 has a marker 210 to allow identification and tracking by an external image capture device, which may be the image capture device 110 described above.
- the multi-faceted marking structure 400 can include a device body 401 and a handle 10 coupled to the device body 401.
- the handle 10 is provided with a connection (not shown) and the device body 401 is coupled to the connection.
- the device main body 401 is provided with a marker 210.
- the external image capture device acquires an image of the marker 210 and obtains the information carried by the multi-faceted marker structure 400, thereby acquiring the identity information and the position and posture information of the multi-faceted marker structure 400, so that identification and/or tracking of the multi-faceted marker structure 400 is achieved.
- the processor 140 in the head-mounted display device 100 can identify an image including the marker 210, thereby acquiring the relative position and posture information between the multi-faceted marker structure 400 and the head-mounted display device 100, generate display content according to that relative position and posture information, and display the content to the user via the display device 120 and the optical component 130.
- the specific morphological structure of the device body 401 is not limited. Specifically, in the embodiment shown in FIGS. 9 and 10, the device body 401 is a twenty-six-faced polyhedron including eighteen square faces and eight triangular faces.
- the device body 401 includes a first surface 12 and a second surface 14, the second surface 14 being non-coplanar with the first surface 12. Specifically, the normal direction of the first surface 12 differs from the normal direction of the second surface 14.
- the first surface 12 is provided with a first marker 220
- the second surface 14 is provided with a second marker 230 different from the first marker 220.
- when the markers are identified, the target object corresponding to the first marker 220 and the second marker 230 is confirmed to be the multi-faceted marker structure 400, and the position and posture information of the multi-faceted marker structure 400 is acquired, so that the multi-faceted marker structure 400 can be identified and tracked.
- the first surface 12 and the second surface 14 may be disposed adjacent to each other, or the first surface 12 and the second surface 14 may be spaced apart.
- the first surface 12 and the second surface 14 may be any two of the eighteen square faces and the eight triangular faces, and are not limited to the description herein.
- the device body 401 may further include any one or more of a third surface, a fourth surface, a fifth surface, up to a twenty-sixth surface (none of which are shown); correspondingly, markers 210 are provided on these surfaces, and the markers 210 on the plurality of surfaces may differ from one another.
- the head-mounted display device 100 can recognize and track the marker 210 information on the multi-faceted marker structure 400 in real time, and thereby acquire the position and orientation information of the multi-faceted marker structure 400.
- the head-mounted display device 100 can recognize and/or track the multi-faceted marker structure 400 according to the markers, enabling the user to output information to the head-mounted display device 100, so that the virtual character corresponding to the multi-faceted marker structure 400 in the virtual world completes the corresponding action instructions.
- an identification layer 20 is disposed outside the device body 401, and the marker 210 is formed by the identification layer 20.
- the marking layer 20 covers the outer surface of the device body 401 and may form corresponding markers 210 at the locations of any one or more of the first surface 12, the second surface 14, the third surface, the fourth surface, the fifth surface, up to the twenty-sixth surface (not shown).
- the marking layer 20 includes a base layer 402, a light reflecting layer 404, and a pattern layer 406.
- the base layer 402 covers the surface of the device body 401 and is used to carry the reflective layer 404; the reflective layer 404 and the pattern layer 406 together form the marker 210.
- the base layer 402 is a base fabric made of cloth. It will be understood that in other embodiments the base layer 402 may be made of cloth, plastic, or the like; when the base layer 402 is made of a hard material, it may be made of metal, an alloy, or a similar material. In some embodiments the base layer 402 may even be omitted, with the reflective layer 404 disposed directly on the device body 401 and forming the marker 210 together with the pattern layer 406.
- the light reflecting layer 404 is disposed on a side of the base layer 402 that faces away from the apparatus body 401, and the light reflecting layer 404 is configured to reflect light so that the marker 210 can be accurately captured by the image capturing device.
- the pattern layer 406 is disposed on a side of the light reflecting layer 404 facing away from the base layer 402.
- the pattern layer 406 is distinguished from the light reflecting layer 404 and is used to form the marker 210 together with the light reflecting layer 404.
- the pattern layer 406 is a logo pattern drawn using ink.
- the illumination device 150 emits light toward the marker 210; the light projected onto the reflective layer 404 is reflected to the image capture device, while the light projected onto the pattern layer 406 is not reflected, or is reflected only slightly. Owing to this difference in reflected light, the image capture device can readily acquire an image containing the marker 210.
- the specific pattern exhibited by the pattern layer 406 is not limited and may be any pattern that is acquired by the image capture device.
- the specific pattern of the pattern layer 406 may be a combination of one or more of any of the following patterns: a circle, a triangle, a rectangle, an ellipse, a wavy line, a line, a curve, etc., and is not limited to the description in this specification.
- the pattern layer 406 can be a logo pattern made of a material other than the ink.
- the pattern layer 406 can be a logo pattern made of a material such as plastic, resin, rubber, or the like.
- the formation of the marker 210 is not limited to the combination of the reflective layer 404 described above and the patterned layer 406, but other forms may be employed.
- the light reflecting layer 404 and the pattern layer 406 can be two different colored objects that are combined to form a pattern for image acquisition device acquisition.
- the marker 210 may be formed by the pattern layer 406 alone, in which case the reflective layer 404 may be omitted, and the pattern layer 406 may be disposed directly on the device body 401 to form the marker 210.
- the marker 210 may also be formed by electronic display without providing the reflective layer 404 and the pattern layer 406; in this case the surface of the device body 401 is a display screen, and the pattern displayed on the surface of the device body 401 is controlled so that the image capture device acquires it as the marker 210.
- the base layer 402 can be omitted, and the light reflecting layer 404 is directly disposed on the surface of the device body 401.
- the marking layer 20 further includes a filter layer 408 disposed on the pattern layer 406 and covering the reflective layer 404 and the pattern layer 406 .
- the filter layer 408 can filter out light other than the light directed from the illumination device toward the reflective layer 404, preventing the reflective layer 404 from being affected by ambient light when reflecting light, thereby making the marker 210 easier to recognize. It can be understood that the filter layer 408 can also be disposed between the reflective layer 404 and the pattern layer 406 (as shown in FIG. 14).
- the filtering performance of the filter layer 408 can be set according to actual needs. For example, when the multi-faceted marker structure 400 enters the field of view of the image capture device, in order to improve recognition efficiency, the head-mounted display device usually has the image capture device acquire the image with the aid of an auxiliary light source, for example an infrared light source.
- the filter layer 408 is used to filter light other than infrared light (such as visible light, ultraviolet light, etc.) so that light other than infrared light cannot pass through the filter layer 408 and infrared light can pass through and reach the reflective layer 404.
- the filter layer 408 filters out ambient light other than the infrared light, so that only the infrared light reaches the reflective layer 404 and is reflected by the reflective layer 404 to the near-infrared image capture device, thereby reducing the impact of ambient light on the recognition/tracking process.
- the filter layer 408 is disposed on the pattern layer 406 and covers the reflective layer 404 and the pattern layer 406. Since the filter layer 408 filters out ambient light other than infrared light (such as visible light, ultraviolet light, etc.), visible light cannot pass through the filter layer 408 and therefore cannot reach the pattern layer 406, so the pattern layer 406 is invisible to the naked eye.
- to the naked eye the marking layer 20 thus appears patternless, which improves the appearance of the multi-faceted marker structure 400 and lends it a sense of technology. It should be understood that the above-mentioned "infrared light" is merely an example and is not limiting; in practical applications the light may be selected according to actual needs, such as ultraviolet light or other rays, which will not be described in detail herein.
- the shape of the device body 401 can be other shapes; the device body 401 can include at least the first surface 12 and the second surface 14, with corresponding markers 210 provided on the first surface 12 and the second surface 14, so that the head-mounted display device can recognize the multi-faceted marker structure 400 according to the markers 210 on it and acquire/track its posture.
- the device body 401 can be designed as a regular tetrahedron that includes four equilateral triangle faces, wherein the first surface 12 is adjacent to the second surface 14.
- the first surface 12 and the second surface 14 may be spaced apart.
- the device body 401 can be designed as a regular hexahedron comprising six square faces, wherein the first surface 12 is adjacent to the second surface 14.
- the first surface 12 and the second surface 14 may be spaced apart.
- the device body 401 can be designed as an octahedron including eight equilateral triangle faces, wherein the first surface 12 is adjacent to the second surface 14.
- the first surface 12 and the second surface 14 may be spaced apart.
- the apparatus body 401 can be designed as a dodecahedron comprising twelve regular pentagons, wherein the first surface 12 is adjacent to the second surface 14.
- the first surface 12 and the second surface 14 may be spaced apart.
- the device body 401 can also be designed as other polyhedral structures, such as the polyhedral structures shown in FIGS. 15(e) to 15(i), which will not be described in detail herein. It should be understood that the device body 401 is a polyhedral structure including a plurality of faces, a plurality of edges, and a plurality of vertices; a sphere can of course be understood as a polyhedron formed of an infinite number of faces. It can also be understood that the multi-faceted marker structure may be a body whose planar surfaces are combined with curved surfaces, or whose curved surfaces are combined with other curved surfaces.
- the polyhedral structure of the device body 401 can be considered as a combination of a plurality of polyhedral structures.
- the device body 401 in FIG. 15(c) can be regarded as a polyhedral structure in which two quadrangular pyramids are combined.
- the device body 401 in FIG. 15(e) can be regarded as a polyhedral structure in which four pentagonal pyramids are combined.
- the device body 401 in FIG. 15(g) can be regarded as a polyhedral structure in which a plurality of quadrangular pyramids are combined.
- the polyhedral structure of the device body may be any combination of any one or more of the following: pyramids, prisms, frustums, polyhedrons, spheres, and the like.
- the visual interaction device is not limited to the multi-faceted marker structure in the above embodiments, and may be any carrier having at least two non-coplanar markers 210.
- the above-mentioned visual interaction device includes a device body 401 in a polyhedral shape, and the device body 401 includes at least a first surface 12 carrying the first marker 220 and a second surface 14 carrying the second marker 230, wherein the second marker 230 is distinct from the first marker 220.
- the head-mounted display device recognizes and tracks the marker 210 information on the visual interaction device in real time and acquires the position and orientation information of the visual interaction device, so that the head-mounted display device can recognize and/or track the visual interaction device according to the markers.
- the visual interaction device is more easily captured by the image acquisition device by the marker 210, and the adverse effect of the ambient light on the recognition/tracking can be avoided, thereby improving the accuracy of the visual interaction device being recognized.
- each marker 210 on the visual interaction device can include a plurality of mutually separated sub-markers 212, as shown in FIG. 16; the specific number of sub-markers 212 included in each marker 210 is not limited, and may be set according to the size range of the marker 210 or determined according to specific identification requirements.
- the sub-marker 212 has a certain degree of discrimination from the reflective layer 216 of the marker 210.
- the color of the sub-marker 212 and the color of the reflective layer 216 in the marker 210 may have a large color difference.
- the sub-marker 212 is black and the reflective layer 216 is white.
- the surface of the coating layer 304 may also differ from the light reflecting layer 216 of the marker 210. As shown in FIG. 4, the reflective layer 216 of the marker 210 is white, and the surface of the coating layer 304 is black.
- Each sub-marker 212 includes one or more feature points 214, and each feature point 214 in each sub-marker 212 is separated from one another.
- the number of feature points 214 included in each sub-marker 212 is not limited, and may be determined according to the actual identification requirement and the size of the area occupied by the marker 210.
- the shape of each feature point 214 is not limited in the embodiment of the present application, and may be a polygon such as a triangle or a quadrangle, or may be a circle.
- the sub-marker 212 can be a hollow pattern comprising one or more hollow portions, wherein each hollow portion can serve as a feature point 214, such as the black sub-marker 212a including white dots 214 shown in the figure.
- the sub-marker 212 may also be a plurality of interconnected rings, the hollow portion of each ring serving as a feature point 214 in the sub-marker 212, as shown at (a) in the figure.
- a solid pattern may also be disposed in any hollow portion of the sub-marker 212, the solid pattern serving as the feature point 214 corresponding to that hollow portion, such as the sub-marker 212b shown in the figure.
- a hollow pattern, such as a circular ring, may also be disposed in a hollow portion of the sub-marker 212 and used as the feature point 214 corresponding to that hollow portion.
- a nested hollow pattern may be set in the sub-marker, such as a nested circle, and the last nested hollow circle is used as the feature point 214.
- the number of the nesting layers of the hollow pattern in the sub-marker 212 can be set according to the actual identification requirement, or determined according to the resolution of the image capturing device, which is not limited in the embodiment of the present application.
- among the sub-markers 212 of the marker 210, there may be at least one sub-marker 212 consisting of solid patterns separated from each other, each solid pattern being a feature point 214.
- the mutually separated black solid circles in FIG. 9 may together constitute a sub-marker 212c, and each black solid circle is a feature point 214 in the sub-marker 212c.
- so that the identity information of each marker 210 can be determined, the individual markers 210 in the identification tracking system can be different from each other.
- for example, the markers 210 may differ in that the number of sub-markers 212 included in one marker 210 differs from the number of sub-markers included in the other markers.
- for example, the identification tracking system 10 includes three markers 210, and the numbers of sub-markers 212 of the three markers 210 are x, y, and z, respectively, where x, y, and z may be integers greater than or equal to 1 and are not equal to each other.
- the processor 140 in the head mounted display device 100 determines the identity of the marker 210 by identifying the number of child markers 212 contained in the marker 210.
- the markers 210 may also differ in that the number of feature points 214 of at least one sub-marker 212 in a marker 210 differs from the number of feature points 214 of the sub-markers 212 in the other markers 210.
- the processor 140 can determine the identity of the marker 210 corresponding to a sub-marker 212 by identifying the number of feature points 214 in the sub-marker 212. For example, one sub-marker 212 in a marker 210 has three feature points 214, and none of the other markers 210 includes a sub-marker 212 having three feature points 214.
- when the processor 140 identifies the sub-marker 212 having three feature points 214, the identity of the marker 210 corresponding to that sub-marker 212 can be determined: the marker 210 is the one in the preset marker model that contains a sub-marker 212 with three feature points.
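The identity lookup described above can be sketched as a search over a preset marker model. The following Python sketch is illustrative only (`MARKER_MODEL` and all names are assumptions, not from this application); it resolves a marker's identity only when the observed feature-point count belongs to exactly one marker:

```python
# Hypothetical preset marker model: feature-point count per sub-marker.
MARKER_MODEL = {
    "marker_A": [3, 2, 5, 1],
    "marker_B": [4, 4, 2],
    "marker_C": [1, 2, 2, 6],
}

def identify_by_feature_count(observed_count):
    """Return the marker whose model contains a sub-marker with the
    observed feature-point count, if that count is unique to it."""
    matches = [mid for mid, counts in MARKER_MODEL.items()
               if observed_count in counts]
    return matches[0] if len(matches) == 1 else None

# A sub-marker with 3 feature points occurs only in marker_A.
assert identify_by_feature_count(3) == "marker_A"
# A count of 2 occurs in every marker, so it is ambiguous on its own.
assert identify_by_feature_count(2) is None
```

An ambiguous count would have to be disambiguated by one of the other criteria the application describes (shape, nesting layers, or number combinations).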
- the markers 210 may also differ in that the shape of the feature points 214 of at least one sub-marker 212 in a marker 210 differs from the shape of the feature points 214 of the sub-markers 212 in the other markers 210.
- the processor 140 can determine the identity of the marker 210 corresponding to a sub-marker 212 by identifying the shape of the feature points 214 in the sub-marker 212. For example, one marker 210 includes a sub-marker 212 whose feature points 214 are solid circles, and none of the other markers 210 includes a sub-marker 212 with solid-circle feature points 214.
- when the processor 140 recognizes the sub-marker 212 whose feature points 214 are solid circles, the identity of the marker 210 corresponding to that sub-marker 212 can be determined: the marker 210 is the one in the preset marker model that contains the sub-marker 212 with solid-circle feature points.
- the markers 210 may also differ in that the number of nesting layers of the hollow pattern in at least one sub-marker 212 of a marker 210 differs from that of the sub-markers 212 in the other markers 210; therefore, when the processor 140 identifies a sub-marker 212 whose hollow pattern has that number of nesting layers, the identity of the marker 210 corresponding to the sub-marker 212 can be determined. For example, only one marker 210 has a hollow portion in which a solid dot is disposed, the solid dot serving as the feature point 214 of the sub-marker 212.
- when the processor 140 recognizes the sub-marker 212 with a solid dot disposed in its hollow portion, the identity of the marker 210 corresponding to the sub-marker 212 can be determined: the marker 210 is the one in the preset marker model that contains a sub-marker 212 with a solid dot disposed in a hollow portion.
- the markers 210 may also differ in that the number combination of one marker 210 differs from the number combinations corresponding to the other markers 210.
- the numbers of feature points 214 of the sub-markers 212 in each marker 210 constitute that marker's number combination.
- for example, the marker 210 includes four sub-markers 212, wherein the number of feature points of the sub-marker 212a is 3, that of the sub-marker 212b is 2, that of the sub-marker 212c is 5, and that of the sub-marker 212d is 1; the numbers of feature points 214 of the four sub-markers form the number combination of the marker 210.
- the number combination can be a combination of numbers that arrange the sub-markers in a certain direction.
- for example, the number combination of the sub-markers arranged in the clockwise direction may be 3152, and the number combination in the counterclockwise direction may be 3251, etc., wherein the sub-marker serving as the starting point of the number combination may be any selected sub-marker.
- the sub-marker having the largest or smallest number of feature points may be selected, and is not limited herein.
- the number combination corresponding to the marker 210 can also be expressed in other ways, and is not limited to the manner described above.
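Because the starting sub-marker of a number combination can be chosen arbitrarily, an implementation might canonicalize the sequence so that all rotations of the same combination compare equal. This is a minimal illustrative Python sketch, not a method stated in this application:

```python
def canonical_combination(counts):
    """Return the lexicographically smallest rotation of a marker's
    feature-point count sequence, so the choice of starting sub-marker
    does not affect comparison between markers."""
    rotations = [tuple(counts[i:] + counts[:i]) for i in range(len(counts))]
    return min(rotations)

# Reading the same marker from two different starting sub-markers
# yields the same canonical combination.
assert canonical_combination([3, 1, 5, 2]) == canonical_combination([5, 2, 3, 1])
```

Clockwise and counterclockwise readings remain distinct under this scheme; also minimizing over the reversed sequence would merge those if desired.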
- the manner in which the marker 210 is distinguished is not limited, and may be one of the above various methods, or may be any combination of multiple modes.
- for example, the number of sub-markers 212 included in one marker 210 differs from the number of sub-markers 212 included in the other markers 210, so that when the processor 140 recognizes that number of sub-markers 212, the identity of the marker 210 corresponding to those sub-markers 212 can be determined; meanwhile, the number of feature points 214 of a sub-marker 212 in another marker 210 differs from the number of feature points 214 of the sub-markers 212 in the remaining markers 210, so that when the processor identifies the sub-marker 212 having that number of feature points 214, the identity of the marker 210 corresponding to that sub-marker 212 can be determined.
- Other combinations may be used, and are not limited herein.
- the marker 210 is not limited to the above-mentioned embodiment, and may be other shapes.
- the marker 210 may be a distinguishable geometric figure (such as a circle, a triangle, a rectangle, an ellipse, a wavy line, a straight line, a curve, etc.), a predetermined pattern (such as an animal head or a commonly used symbol such as a traffic sign), or another pattern.
- the marker 210 can be an identification code such as a barcode, a two-dimensional code, or the like.
- the visual interaction device in the identification tracking system provided in the embodiment of the present application includes a device body and a marker disposed on the device body.
- the markers are different from each other and are disposed at different positions on the device body; therefore, the head-mounted display device can recognize the identity information of each marker and determine the posture information of the visual interaction device according to each marker's position on the device body and the current position and posture of the markers.
- the determination result is thus more accurate, and the posture information of the visual interaction device can be obtained without providing a sensor in the visual interaction device, reducing the cost and power consumption of the visual interaction device.
- another multi-faceted marker structure 500 is provided.
- the multi-faceted marker structure 500 is substantially the same as the multi-faceted marker structure 400 provided in the above embodiment.
- the multi-faceted marker structure 500 also includes a device body 501 and a handle 50, and the device body 501 is similarly provided with a first surface 512, a first marker 5121, a second surface 514, and a second marker 5141.
- the multi-faceted marker structure 500 in this embodiment differs in that:
- the device body 501 further includes a third surface 516, and the third surface 516 is provided with a dynamic area 5161 for dynamically displaying content according to user requirements, so that the image capturing device can acquire different content from the dynamic area 5161. Specifically, the user causes the dynamic area 5161 to present different content by operating the handle 50.
- the dynamic area 5161 is a hollow structure, and one end of the handle 50 is rotatably received in the device body 501; the command area on the handle 50 can be exposed through the hollow structure of the dynamic area 5161 to be acquired by the image capture device.
- specifically, the hollow structure of the dynamic area 5161 is a through hole.
- the portion of the handle 50 received in the device body 501 is provided with a command area (not shown) and a non-command area (not shown), and the handle 50 is driven by an external force to rotate relative to the device body 501, so that either the command area or the non-command area is presented in the hollow structure; the image capture device can thus acquire the instruction state change of the multi-faceted marker structure 500 from the dynamic area 5161 and thereby obtain the user's action instruction, so that the character in the virtual world executes the corresponding action instruction.
- the command area is a reflective block
- the user rotates the handle 50 relative to the device body 501 so that the hollow structure of the dynamic area 5161 does or does not present the reflective block (i.e., the command area or the non-command area is presented in the hollow structure).
- since the reflective block can be acquired by the image capture device with the aid of the illumination device, two states of the multi-faceted marker structure 500 can be distinguished, and by switching the state of the multi-faceted marker structure 500 the user can make the virtual character perform the corresponding operation in the virtual world.
- the first surface 512 and the second surface 514 are any two of the eighteen square faces, and the third surface 516 is any one of the eight triangular faces. It can be understood that in other embodiments the number of third surfaces 516 may be one or more; when there are a plurality of third surfaces 516, they are all triangular faces, and the first surface 512 and the second surface 514 are any two of the eighteen square faces.
- the handle 50 is provided with an operating portion 502 for controlling the switching of the command zone.
- by operating the operating portion 502, the user can change the display content of the dynamic area 5161 so that at least two instruction areas are presented in the hollow structure, allowing the image capture device to acquire an image containing the instruction area from the dynamic area 5161, so that the character in the virtual world completes the corresponding action instruction.
- the operating portion 502 can be a physical button, a virtual button, a knob, or another triggering member.
- the user's operation of the operating portion 502 can be pressing, touching, rotating, toggling, or another triggering action, and is not limited to those described in this specification.
- the dynamic area 5161 is used to display the instruction area or the non-instruction area according to the state of the handle 50, allowing the image capture device to acquire the status information of the multi-faceted marker structure 500 from the dynamic area 5161. For example, when the handle 50 is in the first state, the dynamic area 5161 displays the command area, and when the handle 50 is in the second state, the dynamic area 5161 displays the non-command area.
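The handle-state-to-display mapping just described reduces to a trivial state function. The following Python sketch is purely illustrative (the state names "first"/"second" and all identifiers are assumptions, not terms from this application):

```python
def dynamic_area_content(handle_state):
    """Return what the image capture device sees in the dynamic area
    for a given handle state."""
    return "command_area" if handle_state == "first" else "non_command_area"

# Rotating the handle switches which area shows through the hollow structure.
assert dynamic_area_content("first") == "command_area"
assert dynamic_area_content("second") == "non_command_area"
```

The recognition side would invert this mapping: observing which area is visible tells the system the handle state, and hence the user's instruction.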
- the device main body 501 is provided with a dynamic area 5161 for displaying an instruction area or a non-instruction area according to the state of the handle 50.
- the head-mounted display device recognizes and tracks the instruction area or non-instruction area on the visual interaction device in real time, thereby acquiring the control status information of the visual interaction device.
- the visual interaction device is more easily collected by the image acquisition device through the setting of the command area, which can avoid the adverse effects of the ambient light on the recognition/tracking, thereby improving the accuracy of the visual interaction device being recognized.
- another multi-faceted marker structure 600 is provided.
- the multi-faceted marker structure 600 is substantially the same as the multi-faceted marker structure 500 provided in the above embodiment; it also includes a device body 601 and a handle 60, and the device body 601 is similarly provided with a first surface 612, a first marker 6121, a second surface 614, and a second marker 6141.
- the device body 601 is similarly provided with a dynamic region 6161.
- the multi-sided marking structure 600 of the present embodiment differs in that:
- the device body 601 includes a first housing 611 and a second housing 613.
- the first housing 611 and the second housing 613 are substantially identical in structure.
- The first housing 611 and the second housing 613 are fastened to each other to form the polyhedral structure of the device body 601.
- the first housing 611 and the second housing 613 are each provided with a dynamic area 6161.
- The handle 60 includes a grip member 61 and a control member 63.
- The grip member 61 is connected to the first housing 611, and the control member 63 is connected to the grip member 61 and received in the device body 601.
- By operating the grip member 61, the control member 63 can be rotated within the device body 601 so that the instruction area or the non-instruction area is presented in the dynamic area 6161.
- The arrangement of the instruction area and the non-instruction area is substantially the same as in the second embodiment.
- The control member 63 includes a connecting member 631, a first control portion 633, and a second control portion 635.
- The connecting member 631 is connected to the grip member 61 and extends inside the device body 601 along the axial direction of the grip member 61.
- The first control portion 633 is disposed at the end of the connecting member 631 away from the grip member 61 and is received in the receiving space formed by the first housing 611.
- The second control portion 635 is connected to the side of the first control portion 633 facing away from the connecting member 631 and is received in the receiving space formed by the second housing 613.
- At least one of the first control portion 633 and the second control portion 635 is provided with an instruction area and a non-instruction area adjacent to the instruction area; by driving the control member 63 to rotate relative to the device body 601, the instruction area on at least one of them can be presented in the corresponding dynamic area 6161.
- In one embodiment, the first control portion 633 is provided with a first instruction area 6332 and a first non-instruction area 6334.
- An external force drives the first control portion 633 to rotate relative to the device body 601, presenting either the first instruction area 6332 or the first non-instruction area 6334 in the dynamic area 6161, so that the image capture device can read the instruction state change of the multi-sided marker structure 600 from the dynamic area 6161, obtain the user's action instruction, and have the character in the virtual world execute the corresponding action instruction.
- Similarly, the second control portion 635 is provided with a second instruction area 6352 and a second non-instruction area 6354.
- The handle 60 is provided with an operating portion 62 that is coupled to the first control portion 633 and the second control portion 635.
- By operating the operating portion 62, the user can rotate the first control portion 633 and the second control portion 635 relative to the device body 601, thereby changing the content displayed in the dynamic area 6161.
- In some implementations, the rotation of the first control portion 633 and the rotation of the second control portion 635 do not affect each other; alternatively, the first control portion 633 and the second control portion 635 rotate in linkage.
- In the embodiments provided in this specification, the embodiments above may be combined with one another where no conflict arises, and so may their features. For example, the multi-sided marker structure 400 may include the third surface 516 and the dynamic area 5161 of the multi-sided marker structure 500, and may also include the device body 601 and handle 60 of the multi-sided marker structure 600;
- as another example, the multi-sided marker structure 500 may include the device body 601 and handle 60 of the multi-sided marker structure 600, and so on; this description is not exhaustive.
- FIG. 19 is a diagram showing an application scenario of a multi-sided mark structure in one embodiment.
- The multi-sided marker structures 400, 500, 600 are applied in an identification tracking system and serve as visual interaction devices of that system.
- the identification tracking system includes an image capture device 1910, a head mounted display device 1900, and multi-faceted marker structures 400, 500, 600.
- the head mounted display device 1900 includes a control center 1902 that is a transflective lens and a display 1904 that is used to deliver image content to the display 1904 to enable a user to view image content in the display 1904. While the user sees the image content in the display 1904, the front environment can be observed through the display 1904. Therefore, the image obtained by the user's eyes is a virtual reality superimposed scene in which the image content is superimposed with the front environment.
- The image capture device 1910 is electrically coupled to the head-mounted display device 1900 and acquires environmental information within its field of view.
- The multi-sided marker structures 400, 500, 600 are intended to be held in the user's hand and allow the user to send the information required for control to the control center 1902 via the multi-sided marker structures 400, 500, 600.
- In use, the user changes the posture of the multi-sided marker structures 400, 500, 600, and the image capture device 1910 acquires an image of the markers on them and transmits it to the control center 1902.
- Based on the marker image, the control center 1902 determines the posture of the multi-sided marker structures 400, 500, 600 (including static posture, motion posture, and the like) and delivers corresponding content to the display 1904 or executes a corresponding action (for example, the image capture device 1910 continues to identify and track the multi-sided marker structures 400, 500, 600).
- For example, by adjusting the posture of the multi-sided marker structures 400, 500, 600, the user interacts with virtual items in the virtual reality scene (grabbing, selecting, and so on).
- As another example, the user holds the visual interaction device 100 in a specific posture, such as a sword-holding posture; after the image capture device 1910 captures this posture, the control center 1902 projects an image of a sword in the display 1904 and superimposes the image of the sword on the images of the multi-sided marker structures 400, 500, 600.
- The scene the user sees in the display 1904 is thus one of the user holding a sword.
- The image capture device 1910 is provided with an emitter 1912 for emitting light toward the visual interaction device 100, and an image acquisition unit 1914; light emitted by the emitter 1912 and projected onto the multi-sided marker structures 400, 500, 600 is reflected back to the image acquisition unit 1914, enabling the image acquisition unit 1914 to acquire images of the markers on the multi-sided marker structures 400, 500, 600.
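The patent does not specify how the reflected light is processed on the tracking side. As one plausible sketch (the function name, threshold, and flood-fill approach are illustrative assumptions, not from the source), the bright retro-reflected blobs in an infrared frame could be segmented by intensity and reduced to centroids, which then serve as candidate feature points of the markers:

```python
import numpy as np

def detect_feature_points(ir_frame: np.ndarray, thresh: int = 200) -> list:
    """Return (row, col) centroids of bright retro-reflective blobs in an
    IR frame, via thresholding plus a minimal 4-connected flood fill.
    (A production tracker would use a library routine such as OpenCV's
    connectedComponentsWithStats instead of this hand-rolled scan.)"""
    mask = ir_frame >= thresh
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    centroids = []
    for y in range(h):
        for x in range(w):
            if not mask[y, x] or seen[y, x]:
                continue
            # Grow the blob starting from this unvisited bright pixel.
            stack, pixels = [(y, x)], []
            seen[y, x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                               (cy, cx + 1), (cy, cx - 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            n = len(pixels)
            centroids.append((sum(p[0] for p in pixels) / n,
                              sum(p[1] for p in pixels) / n))
    return centroids
```

Counting centroids per sub-marker region is one way the system could then distinguish markers whose sub-markers carry different numbers of feature points, as the description suggests.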
- The application scenario above is only one of the possible application scenarios of the visual interaction device provided by the present application; the visual interaction device can also be applied to other scenarios. For example, it can serve as a remote controller for a robot in the robot's working scene;
- as another example, it can serve as a human-machine interaction device in interaction scenarios between the user and electric appliances, electronic devices, and the like; these are not described in detail here.
Abstract
A visual interaction device applied to an identification tracking system. The visual interaction device includes a device body whose surface is provided with one or more markers; the markers are captured by an image capture device in the identification tracking system to determine position and posture information of the visual interaction device.
Description
Cross-reference to related applications
This application claims priority to the following Chinese patent applications filed with the China Patent Office on February 6, 2018: No. CN201810118719.3, entitled "Visual Interaction Device"; No. CN201810119298.6, entitled "Visual Interaction Device"; No. CN201810119299.0, entitled "Visual Interaction Device"; and No. CN201810119871.3, entitled "Visual Interaction Device and Marker"; the entire contents of which are incorporated herein by reference.
To explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be derived from these drawings without inventive effort.
FIG. 1 is an architecture diagram of the identification tracking system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a marker in one embodiment;
FIG. 3a is a structural diagram of a visual interaction device in one embodiment;
FIG. 3b is a structural diagram of a visual interaction device in another embodiment;
FIG. 3c is a structural diagram of a visual interaction device in another embodiment;
FIG. 3d is a structural diagram of a visual interaction device in another embodiment;
FIG. 3e is a structural diagram of a visual interaction device in another embodiment;
FIG. 4 is a structural diagram of a planar marker object in one embodiment;
FIG. 5 is a structural sectional view of the planar marker object shown in FIG. 4 in one embodiment;
FIG. 6 is a structural sectional view of the planar marker object shown in FIG. 4 in another embodiment;
FIG. 7 is a structural sectional view of the planar marker object shown in FIG. 4 in another embodiment;
FIG. 8 is a structural sectional view of the planar marker object shown in FIG. 4 in another embodiment;
FIG. 9 is a structural diagram of a multi-sided marker structure in one embodiment;
FIG. 10 is a structural diagram of the multi-sided marker structure shown in FIG. 9 from another viewing angle;
FIG. 11 is a structural diagram of a multi-sided marker structure in another embodiment;
FIG. 12 is a structural sectional view of the multi-sided marker structure shown in FIG. 11 in one embodiment;
FIG. 13 is a structural sectional view of the multi-sided marker structure shown in FIG. 11 in another embodiment;
FIG. 14 is a structural sectional view of the multi-sided marker structure shown in FIG. 11 in another embodiment;
FIG. 15(a) to FIG. 15(i) are perspective schematic views of device bodies of visual interaction devices provided by other embodiments of the present application;
FIG. 16 is a schematic diagram of a marker in another embodiment;
FIG. 17 is a structural diagram of a multi-sided marker structure in another embodiment;
FIG. 18 is an exploded structural view of a multi-sided marker structure in one embodiment;
FIG. 19 is a diagram of an application scenario of a multi-sided marker structure in one embodiment.
The above are merely preferred embodiments of the present application and are not intended to limit it; for those skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its scope of protection. It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (39)
- A visual interaction device applied to an identification tracking system, wherein the visual interaction device comprises a device body, a surface of the device body is provided with one or more markers, and the markers are captured by an image capture device in the identification tracking system to determine position and posture information of the visual interaction device.
- The visual interaction device according to claim 1, wherein the one or more markers are disposed on a surface of the device body lying in a single plane; or the one or more markers are disposed on surfaces of the device body lying in different planes, one or more markers being disposed on the surface corresponding to each plane; or the one or more markers are disposed at the junction of surfaces of the device body lying in adjacent planes.
- The visual interaction device according to claim 1, wherein the one or more markers protrude from the surface of the device body; or the surface of the device body defines grooves corresponding in number to the markers, and the one or more markers are disposed in the corresponding grooves in the surface of the device body.
- The visual interaction device according to claim 1, wherein a base layer is provided outside the device body, the one or more markers are disposed on the base layer, each marker comprises a first identification layer and a second identification layer on the first identification layer, the first identification layer is distinguishable from the second identification layer, and the second identification layer is distinguishable from the first identification layer so as to form the sub-markers.
- The visual interaction device according to claim 4, further comprising a coating layer, wherein the first identification layer is a reflective layer, and the coating layer is disposed between the base layer and the reflective layer and covers the base layer; or further comprising a coating layer and a filter layer, wherein the first identification layer is a reflective layer, the coating layer is disposed between the base layer and the reflective layer and covers the base layer, and the filter layer covers the surface of the reflective layer.
- The visual interaction device according to claim 4, further comprising one or more coating layers, wherein the first identification layer is a reflective layer, the one or more coating layers are disposed on the base layer in the same layer as the one or more reflective layers, and the one or more coating layers and the one or more reflective layers together cover the base layer; or further comprising a filter layer and one or more coating layers, wherein the first identification layer is a reflective layer, the one or more coating layers are disposed on the base layer in the same layer as the one or more reflective layers, the one or more coating layers and the one or more reflective layers together cover the base layer, and the filter layer covers the surfaces of the coating layers and the reflective layers.
- The visual interaction device according to claim 1, wherein the markers on the device body differ from one another, and each marker comprises a plurality of mutually separated sub-markers, each sub-marker having one or more feature points therein.
- The visual interaction device according to claim 7, wherein the numbers of sub-markers and of their corresponding feature points in each marker form a number combination of that marker, and the number combinations of the markers differ from one another.
- The visual interaction device according to claim 8, wherein the number of sub-markers comprised in at least one marker differs from the numbers of sub-markers comprised in the other markers.
- The visual interaction device according to claim 8, wherein the number of feature points of at least one sub-marker in at least one marker differs from the numbers of feature points of the sub-markers in the other markers.
- The visual interaction device according to claim 7, wherein the shape of the feature points of at least one sub-marker in at least one marker differs from the shapes of the feature points of the sub-markers in the other markers.
- The visual interaction device according to claim 7, wherein at least one sub-marker in the marker is a hollow pattern, the hollow pattern includes one or more hollow portions, and the hollow portions are the feature points of the sub-marker.
- The visual interaction device according to claim 7, wherein at least one sub-marker in the marker is a hollow pattern including one or more hollow portions, at least one hollow portion of the sub-marker includes a solid pattern, a hollow portion not including a solid pattern is a feature point, and for a hollow portion including a solid pattern, the solid pattern therein is the feature point.
- The visual interaction device according to claim 7, wherein at least one sub-marker in the marker consists of mutually connected rings.
- The visual interaction device according to claim 7, wherein at least one sub-marker in the marker consists of mutually separated solid patterns, each solid pattern being one feature point of the sub-marker.
- The visual interaction device according to claim 1, wherein the markers include a first marker and a second marker distinguishable from the first marker; the device body includes a first surface and a second surface not coplanar with the first surface, the first surface is provided with the first marker, and the second surface is provided with the second marker.
- The visual interaction device according to claim 16, wherein the device body is a polyhedral structure and the markers are disposed on an outer surface of the device body; or the device body is a polyhedral structure and the markers are formed by display on a display screen on the outer surface of the device body.
- The visual interaction device according to claim 17, wherein the device body is a twenty-six-faced polyhedron including eighteen square faces and eight triangular faces, and the first surface and the second surface are any two of the eighteen square faces and the eight triangular faces.
- The visual interaction device according to claim 17, wherein the device body is any one of, or a combination of any of, the following structures: a pyramid, a prism, a frustum, a polyhedron, a sphere.
- 根据权利要求16所述的视觉交互装置,其特征在于,所述视觉交互装置还包括与所述装置主体连接的手柄,所述装置主体设有动态区,所述手柄的一端收容于所述装置主体中,所述手柄收容于所述装置主体内的部分设有所述指令区以及非指令区;当所述手柄处于第一状态时,所述动态区显示指令区,当所述手柄处于第二状态时,所述动态区显示非指令区。
- 根据权利要求20所述的视觉交互装置,其特征在于,所述动态区为镂空结构,所述手柄包括握持件以及连接于所述握持件的控制件,所述握持件连接于所述装置主体,所述控制件可转动地收容于所述装置主体内,所述指令区以及所述非指令区设置于所述控制件;所述指令区以及所述非指令区根据所述控制件相对所述装置主体的转动状态呈现于所述镂空结构中。
- 根据权利要求21所述的视觉交互装置,其特征在于,所述装置主体包括第一壳体以及与所述第一壳体扣合的第二壳体,所述握持件连接于所述第一壳体;所述第一壳体及所述第二壳体均设置有所述镂空结构。
- 根据权利要求21所述的视觉交互装置,其特征在于,所述控制件包括连接于所述握持件的连接件、连接于所述连接件的第一控制部及连接于所述第一控制部的第二控制部,所述第一控制部及所述第二控制部上均设有所述指令区以及所述非指令区;或/及,所述手柄上设有操作部,所述操作部连接于所述控制件,所述指令区或所述非指令区根据所述操作部的触发状态,呈现于所述镂空结构中;或/及,所述指令区为反光块。
- 根据权利要求20所述的视觉交互装置,其特征在于,所述装置主体还包括第三表面,所述动态区设置于所述第三表面。
- A marker, applied to an identification tracking system, wherein the marker comprises a plurality of mutually separated sub-markers, each sub-marker having one or more feature points therein, and the marker is captured by an image acquisition device in the identification tracking system so as to determine position and attitude information of the marker.
- The marker according to claim 25, wherein at least one sub-marker in the marker is a hollow pattern, the hollow pattern comprises one or more hollow portions, and the hollow portions are the feature points of the sub-marker.
- The marker according to claim 25, wherein at least one sub-marker in the marker is a hollow pattern comprising one or more hollow portions, at least one hollow portion of the sub-marker comprises a solid pattern, each hollow portion comprising no solid pattern is a feature point, and for each hollow portion comprising a solid pattern, the solid pattern therein is a feature point.
- The marker according to claim 25, wherein at least one sub-marker in the marker consists of interconnected circular rings.
- The marker according to claim 25, wherein at least one sub-marker in the marker consists of mutually separated solid patterns, each solid pattern being one feature point of the sub-marker.
- A visual interaction device comprising a handle, applied to an identification tracking system, wherein the visual interaction device further comprises a device body connected to the handle, the device body is provided with a dynamic region, one end of the handle is received in the device body, and the portion of the handle received in the device body is provided with an instruction region and a non-instruction region; when the handle is in a first state the dynamic region displays the instruction region, and when the handle is in a second state the dynamic region displays the non-instruction region.
- The visual interaction device according to claim 30, wherein the dynamic region is a hollowed-out structure; the handle comprises a grip member and a control member connected to the grip member, the grip member is connected to the device body, the control member is rotatably received in the device body, and the instruction region and the non-instruction region are disposed on the control member; the instruction region and the non-instruction region are presented in the hollowed-out structure according to a rotation state of the control member relative to the device body.
- The visual interaction device according to claim 31, wherein the device body comprises a first housing and a second housing snap-fitted to the first housing, the grip member is connected to the first housing, and both the first housing and the second housing are provided with the hollowed-out structure.
- The visual interaction device according to claim 31, wherein the control member comprises a connecting member connected to the grip member, a first control portion connected to the connecting member, and a second control portion connected to the first control portion, the instruction region and the non-instruction region being disposed on both the first control portion and the second control portion; and/or the handle is provided with an operating portion connected to the control member, and the instruction region or the non-instruction region is presented in the hollowed-out structure according to a trigger state of the operating portion; and/or the instruction region is a reflective block.
- The visual interaction device according to claim 30, wherein a surface of the device body is provided with markers, the markers comprising a first marker and a second marker distinguishable from the first marker; the device body comprises a first surface, a second surface, and a third surface, the first surface is provided with the first marker, the second surface is provided with the second marker, and the dynamic region is disposed on the third surface.
- The visual interaction device according to claim 34, wherein a marking layer is provided outside the device body, the marking layer covers the first surface and the second surface, and the marking layer forms the markers; each marker comprises one or more sub-markers, each sub-marker comprises one or more feature points, and the feature points are formed by one or both of the reflective layer and the pattern layer.
- The visual interaction device according to claim 35, wherein the marking layer comprises a reflective layer and a pattern layer, the pattern layer is disposed on a side of the reflective layer facing away from the device body, and the pattern layer and the reflective layer together form the markers.
- The visual interaction device according to claim 36, wherein the pattern layer is an ink pattern disposed on the reflective layer, the ink pattern being distinguishable from the reflective layer so that the ink pattern and the reflective layer together form the markers recognizable by the image acquisition device; and/or a foundation layer is provided between the device body and the reflective layer, the foundation layer being a base cloth made of fabric.
- The visual interaction device according to claim 36, wherein the marking layer further comprises a filter layer disposed on a side of the reflective layer and the pattern layer facing away from the device body; or the marking layer further comprises a filter layer disposed between the reflective layer and the pattern layer.
- The visual interaction device according to claim 34, wherein the device body is a polyhedral structure and the markers are formed by being displayed on a display screen disposed on an outer surface of the device body; or the device body is any one of, or a combination of any of, the following structures: a pyramid, a prism, a frustum, a polyhedron, and a sphere; or the device body is a twenty-six-faced polyhedron comprising eighteen square faces and eight triangular faces, and the first surface, the second surface, and the third surface are any three of the eighteen square faces and eight triangular faces.
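The dynamic-region behavior recited in the claims (the dynamic region presents either the instruction region or the non-instruction region depending on the handle's state) can be sketched as a simple state mapping. The state names and returned strings below are assumptions for illustration, not part of the claimed device:

```python
# Sketch of the claimed dynamic-region behavior: the portion of the handle
# received in the device body carries an instruction region and a
# non-instruction region; which one shows through the hollowed-out dynamic
# region depends on the handle's state (e.g. rotation of the control member
# or triggering of the operating portion). Names here are illustrative.

from enum import Enum

class HandleState(Enum):
    FIRST = 1   # e.g. operating portion triggered / control member rotated
    SECOND = 2  # e.g. control member in the rest position

def dynamic_region_shows(state):
    """Region presented in the hollowed-out dynamic region for a handle state."""
    if state is HandleState.FIRST:
        return "instruction region"      # e.g. a reflective block
    return "non-instruction region"
```

An image acquisition device watching the dynamic region could then read the handle's state optically, for instance by detecting whether the reflective instruction block is visible.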
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810118719.3A CN110119191A (zh) | 2018-02-06 | 2018-02-06 | Visual interaction device
CN201810119298.6 | 2018-02-06 | ||
CN201810119299.0A CN110119193B (zh) | 2018-02-06 | 2018-02-06 | Visual interaction device
CN201810119299.0 | 2018-02-06 | ||
CN201810119871.3A CN110119195A (zh) | 2018-02-06 | 2018-02-06 | Visual interaction device and marker
CN201810119871.3 | 2018-02-06 | ||
CN201810119298.6A CN110119192B (zh) | 2018-02-06 | 2018-02-06 | Visual interaction device
CN201810118719.3 | 2018-02-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019153971A1 (zh) | 2019-08-15 |
Family
ID=67547902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/125598 WO2019153971A1 (zh) | 2018-12-29 | Visual interaction device and marker
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019153971A1 (zh) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103135754A (zh) * | 2011-12-02 | 2013-06-05 | Shenzhen Taishan Online Technology Co., Ltd. | Interaction device and method for implementing interaction using the interaction device
CN205176903U (zh) * | 2015-08-24 | 2016-04-20 | Beijing ANTVR Technology Co., Ltd. | Encodable reflective-dot marker
CN205942606U (zh) * | 2016-06-22 | 2017-02-08 | Beijing ANTVR Technology Co., Ltd. | Interactive control rod for virtual reality
CN106980368A (zh) * | 2017-02-28 | 2017-07-25 | Shenzhen Future Perception Technology Co., Ltd. | Virtual reality interaction device based on visual computing and an inertial measurement unit
CN207780718U (zh) * | 2018-02-06 | 2018-08-28 | Guangdong Virtual Reality Technology Co., Ltd. | Visual interaction device
CN207909071U (zh) * | 2018-02-06 | 2018-09-25 | Guangdong Virtual Reality Technology Co., Ltd. | Visual interaction device
CN208126341U (zh) * | 2018-02-06 | 2018-11-20 | Guangdong Virtual Reality Technology Co., Ltd. | Visual interaction device
2018-12-29: WO PCT/CN2018/125598 patent/WO2019153971A1/zh, active Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9986228B2 (en) | Trackable glasses system that provides multiple views of a shared display | |
US20160098095A1 (en) | Deriving Input from Six Degrees of Freedom Interfaces | |
JP6257258B2 (ja) | Image projection system | |
US7768498B2 (en) | Computer input device tracking six degrees of freedom | |
JP6077016B2 (ja) | Substrate assembly used together with toy pieces | |
US20120038549A1 (en) | Deriving input from six degrees of freedom interfaces | |
EP3470966B1 (en) | Orientation and/or position estimation system, orientation and/or position estimation method, and orientation and/or position estimation apparatus | |
US10652525B2 (en) | Quad view display system | |
US20120135803A1 (en) | Game device utilizing stereoscopic display, method of providing game, recording medium storing game program, and game system | |
US20110159957A1 (en) | Portable type game device and method for controlling portable type game device | |
EP2281228B1 (en) | Controlling virtual reality | |
CN208126341U (zh) | Visual interaction device | |
CN110716685B (zh) | Image display method, image display apparatus, system, and physical object thereof | |
CN110119194A (zh) | Virtual scene processing method and apparatus, interactive system, head-mounted display device, visual interaction device, and computer-readable medium | |
JP6109347B2 (ja) | Three-dimensional object | |
CN111083463A (zh) | Display method and apparatus for virtual content, terminal device, and display system | |
CN110119190A (zh) | Positioning method and apparatus, identification tracking system, and computer-readable medium | |
CN110140100A (zh) | Three-dimensional augmented reality object user interface functions | |
CN207909071U (zh) | Visual interaction device | |
WO2019153970A1 (zh) | Head-mounted display device | |
CN110119192B (zh) | Visual interaction device | |
CN209590822U (zh) | Interaction device | |
WO2019153971A1 (zh) | Visual interaction device and marker | |
CN110119193B (zh) | Visual interaction device | |
US11343487B2 (en) | Trackable glasses system for perspective views of a display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18904597 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/12/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18904597 Country of ref document: EP Kind code of ref document: A1 |