WO2018098862A1 - Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus - Google Patents
- Publication number
- WO2018098862A1 (PCT application PCT/CN2016/111063)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current
- image
- gesture
- virtual reality
- gesture recognition
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention relates to the field of virtual reality device technology, and in particular to a gesture recognition method and apparatus for a virtual reality device, and to a virtual reality device.
- Virtual reality (VR) technology will be a key technology supporting a comprehensive, integrated multidimensional information space that combines qualitative and quantitative analysis, perceptual knowledge and rational understanding.
- specifically, VR refers to technology that comprehensively uses computer graphics systems and various display and control interface devices to provide an immersive experience in an interactive three-dimensional environment generated on a computer.
- the immersion of virtual reality devices comes from isolation from the outside world, especially visual and auditory isolation, which deceives the brain and creates a sense of immersion detached from the real world.
- at present, the main human-computer interaction methods for virtual reality devices are speech recognition, eye tracking, and gesture recognition.
- a gesture recognition method for a virtual reality device is provided, wherein the virtual reality device includes at least two cameras, and the gesture recognition method includes: controlling each camera to collect a current gesture image of the current user; stitching the current gesture images to obtain a current stitched image; and performing gesture recognition according to the current stitched image.
- a gesture recognition apparatus for a virtual reality device comprising at least two cameras, the gesture recognition apparatus comprising:
- a current control module configured to control each of the cameras to collect a current gesture image of the current user
- a current splicing module configured to perform splicing processing on each of the current gesture images to obtain a current spliced image
- a gesture recognition module configured to perform gesture recognition according to the current stitched image.
- a virtual reality device comprising a processor and a memory, the memory storing instructions for controlling the processor to perform the gesture recognition method according to the first aspect of the present invention.
- a virtual reality device comprising: at least two cameras disposed at different positions, the viewing angles of adjacent cameras partially overlapping; and the gesture recognition apparatus according to the second aspect of the present invention.
- a computer readable storage medium storing program code for performing the gesture recognition method according to the first aspect of the invention.
- FIG. 1 is a flow chart of an embodiment of a gesture recognition method for a virtual reality device in accordance with the present invention
- FIG. 2 is a flow chart of another embodiment of a gesture recognition method for a virtual reality device according to the present invention.
- FIG. 3 is a block schematic diagram of an implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention.
- FIG. 4 is a block schematic diagram showing another implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention.
- FIG. 5 is a block schematic diagram of an implementation structure of a virtual reality device according to the present invention.
- FIG. 6 is a left side view showing another embodiment of a virtual reality device according to the present invention.
- Figure 7 is a right side elevational view of another embodiment of a virtual reality device in accordance with the present invention.
- a gesture recognition method for the virtual reality device is provided, wherein the virtual reality device includes at least two cameras. Both cameras may be ordinary color cameras, both may be depth cameras, or one may be a color camera and the other a depth camera.
- FIG. 1 is a flow chart of an embodiment of a gesture recognition method for a virtual reality device in accordance with the present invention.
- the gesture recognition method comprises the following steps:
- Step S110 controlling each camera to collect a current gesture image of the current user.
- the camera may, for example, collect the current gesture image of the current user frame by frame.
- Step S120 Perform splicing processing on each current gesture image to obtain a current spliced image.
- Image stitching technology is divided into image registration and image fusion.
- image registration may be performed first: matching points of the images are selected and calibrated, and then all the images are registered to a common coordinate system using an affine model.
- image registration also yields the overlapping regions of images captured by two adjacent cameras.
- image fusion then combines the useful information of the registered images into one picture, while smoothing out the blurring at the seam caused by differences in viewing angle, illumination, and other factors between the registered pictures.
- image fusion can, for example, employ Gaussian pyramid techniques.
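As a rough illustration of these two stages, the sketch below registers two overlapping one-dimensional gray-value strips by searching for the offset with the smallest mean squared difference, then fuses the overlap by averaging. The strip values, function names, and the simple averaging fusion are illustrative stand-ins, not the patent's actual algorithm:

```python
# Minimal sketch: registration (find where two camera strips overlap)
# followed by fusion (blend the overlap). Images are modeled as 1-D
# gray-value lists; all names and values are invented for illustration.

def register_offset(ref, search, min_overlap=3):
    """Return the offset of `search` relative to `ref` that minimizes the
    mean squared gray-value difference over the overlapping region."""
    best_offset, best_cost = None, float("inf")
    for offset in range(1, len(ref) - min_overlap + 1):
        overlap = len(ref) - offset
        cost = sum((ref[offset + i] - search[i]) ** 2 for i in range(overlap))
        cost /= overlap  # normalize so different overlap sizes are comparable
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

def fuse(ref, search, offset):
    """Stitch: keep ref's left part, average the overlap, append search's tail."""
    overlap = len(ref) - offset
    blended = [(ref[offset + i] + search[i]) / 2 for i in range(overlap)]
    return ref[:offset] + blended + search[overlap:]

left = [10, 10, 50, 80, 80, 30]
right = [80, 80, 30, 20, 10]      # its first three samples repeat left's tail
off = register_offset(left, right)
panorama = fuse(left, right, off)
```

The same search-then-blend structure carries over to 2-D images; real pipelines replace the brute-force offset search with affine registration and the averaging with pyramid or feathered blending.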
- the step S120 may specifically include the following steps:
- step S121 each current gesture image is preprocessed to obtain a corresponding image to be registered.
- the preprocessing specifically performs denoising, enhancement, and similar operations on the acquired current gesture image data, and unifies the data format, image size, and resolution.
- step S122 all the images to be registered are subjected to registration processing to obtain an image to be fused.
- image registration specifically aligns the images to be registered: multiple images to be registered, obtained from different cameras, at different times, or from different angles, are optimally matched to obtain an image to be fused.
- image registration is always defined relative to multiple images.
- one of the images to be registered is usually taken as the reference for registration and is called the reference image; the other image to be registered is called the search image.
- the general method of image registration is to first select an image sub-block centered on a target point in the reference image, called the template; the template is then moved over the search image in an orderly manner, and at each position the template is compared with the corresponding part of the search image, until the registration position is found.
- the two images to be registered for the same target are often obtained under different conditions, such as different imaging times, different imaging positions, or even different imaging systems.
- together with the noise introduced during imaging, this means the two images to be registered of the same target can never be identical; they can only be similar to a certain degree.
- image stitching algorithms can generally be divided into the following two types.
- in the region-based stitching algorithm, a region of the image to be stitched is compared with a region of the same size in the reference image; the difference in gray values is calculated by least squares or another mathematical method, and this difference is used to judge the degree of similarity of the overlapping area, thereby obtaining the range and position of the overlapping area of the images to be stitched and achieving image stitching.
- a direct comparison method, a hierarchical comparison method, or a phase correlation method may be employed.
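A hedged sketch of the region-based idea: slide a template taken from the reference image over the search image and take the position with the smallest sum of squared gray-value differences as the registration position. The nested-list images and names are invented for illustration:

```python
# Brute-force template matching by sum of squared differences (SSD),
# the simplest member of the region-based family described above.
# Images are plain nested lists of gray values; data is illustrative.

def match_template(search, template):
    th, tw = len(template), len(template[0])
    best_pos, best_cost = None, float("inf")
    for top in range(len(search) - th + 1):
        for left in range(len(search[0]) - tw + 1):
            cost = sum(
                (search[top + r][left + c] - template[r][c]) ** 2
                for r in range(th) for c in range(tw)
            )
            if cost < best_cost:
                best_pos, best_cost = (top, left), cost
    return best_pos, best_cost

search_img = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8],
            [7, 9]]
pos, cost = match_template(search_img, template)
```

The layered comparison and phase correlation methods mentioned above speed up or robustify this same search; the objective of locating the best-matching region is unchanged.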
- the feature-based registration method does not use the pixel values of the image directly; instead, it derives features from the pixels and then searches for matches between the corresponding feature regions in the overlapping portions of the images.
- the specific method may be, for example, a ratio matching method or a feature point matching method.
- feature-based registration methods have two stages: feature extraction and feature registration. First, features with obvious gray-scale changes, such as points, lines, and regions, are extracted from the images to be registered to form feature sets. Then, a feature matching algorithm selects, from the feature sets of the images to be registered, as many feature pairs with a true correspondence as possible.
- a range of image segmentation techniques are used for feature extraction and boundary detection, such as the Canny operator, the Laplacian of Gaussian operator, and region growing.
- the extracted spatial features have closed boundaries, open boundaries, intersecting lines, and other features.
- feature matching algorithms include cross correlation, distance transformation, dynamic programming, structure matching, and chain code correlation, among others.
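The matching stage can be sketched as a nearest-descriptor search; this is a simplified stand-in for the cross-correlation and structure-matching algorithms named above, with feature points and descriptor vectors that are made up for illustration:

```python
# Pair each feature in image A with the feature in image B whose
# descriptor vector is closest (squared Euclidean distance). All
# coordinates and descriptors below are invented example data.

def descriptor_distance(d1, d2):
    return sum((a - b) ** 2 for a, b in zip(d1, d2))

def match_features(feats_a, feats_b):
    """feats_*: list of ((x, y), descriptor). Returns index pairs (i, j)."""
    pairs = []
    for i, (_, da) in enumerate(feats_a):
        j = min(range(len(feats_b)),
                key=lambda j: descriptor_distance(da, feats_b[j][1]))
        pairs.append((i, j))
    return pairs

feats_a = [((10, 12), [1.0, 0.0, 0.5]), ((40, 5), [0.0, 1.0, 0.9])]
feats_b = [((8, 30), [0.1, 1.0, 0.8]), ((9, 11), [0.9, 0.1, 0.5])]
matches = match_features(feats_a, feats_b)
```

From such matched pairs, the affine model mentioned earlier can be estimated to bring all images into one coordinate system.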
- step S123 image fusion and boundary smoothing processing are performed on the fused image to obtain a current spliced image.
- the overlapping regions of the images to be stitched are fused to obtain a smooth, seamless current stitched image reconstructed by stitching.
- Image fusion is a process of synthesizing multiple images of the same scene obtained by multiple image sensors of different modes or multiple images of the same scene obtained by the same sensor at different times into one stitched image.
- commonly used fusion methods include the IHS (intensity-hue-saturation) fusion method, the KL transform fusion method, the high-pass filter fusion method, the wavelet transform fusion method, the pyramid transform fusion method, the spline transform fusion method, and so on.
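As a minimal stand-in for these fusion methods, the sketch below feathers two aligned overlap rows with a linear weight ramp so the seam transitions smoothly between cameras instead of jumping; the gray values are illustrative:

```python
# Feathered (linear-ramp) blending of an overlap region: the weight
# slides from fully image A at the left edge to fully image B at the
# right edge. This is far simpler than pyramid or wavelet fusion but
# shows the same goal: a seam-free transition.

def feather_blend(row_a, row_b):
    """Blend two equally sized overlap rows of gray values."""
    n = len(row_a)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5   # 0 at left edge, 1 at right edge
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out

overlap_a = [100, 100, 100, 100, 100]   # overlap as seen by camera A
overlap_b = [60, 60, 60, 60, 60]        # same overlap as seen by camera B
blended = feather_blend(overlap_a, overlap_b)
```

Pyramid fusion applies this kind of blending per frequency band, which hides seams even when exposures differ sharply.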
- Step S130: performing gesture recognition according to the current stitched image.
- by stitching, the effective viewing angle of the captured images is enlarged, so the range over which the user's hand can move during gesture recognition is expanded; this greatly improves the user's freedom of movement and increases the user's sense of immersion when using VR.
- the step S130 may specifically include:
- Step S131: extracting a current gesture feature from the current stitched image;
- Step S132: comparing the current gesture feature with the specified gesture features in the database; and
- Step S133: determining the current gesture action according to the comparison result.
- the specified gesture features may be pre-stored in the database before the virtual reality device is shipped, or may be stored in the database before the current user uses the device.
- the gesture recognition method further includes:
- storing, in the database, the specified gesture features corresponding to specified gestures in stitched images; and
- responding with the function corresponding to the specified gesture action, for example, opening an application, and so on.
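Steps S131 to S133 can be sketched as a nearest-match lookup against the database of specified gesture features; the feature vectors, gesture names, and distance threshold below are invented for illustration, not taken from the patent:

```python
# Compare the current gesture feature against specified gesture features
# stored in a database and return the closest gesture action, or None if
# nothing is close enough. Features are toy "finger extension" vectors.

def recognize_gesture(current_feature, database, max_distance=1.0):
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = sum((a - b) ** 2
                   for a, b in zip(current_feature, stored)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

gesture_db = {
    "open_palm": [1.0, 1.0, 1.0, 1.0, 1.0],   # all five fingers extended
    "fist":      [0.0, 0.0, 0.0, 0.0, 0.0],
    "point":     [0.0, 1.0, 0.0, 0.0, 0.0],   # index finger only
}
current = [0.1, 0.9, 0.1, 0.0, 0.1]           # noisy pointing gesture
action = recognize_gesture(current, gesture_db)
```

The recognized action name would then be mapped to the corresponding device function, such as opening an application.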
- the present invention also provides a gesture recognition apparatus for a virtual reality device.
- 3 is a block schematic diagram of an implementation structure of a gesture recognition apparatus for a virtual reality device in accordance with the present invention.
- the gesture recognition apparatus 300 includes a current control module 310, a current splicing module 320, and a gesture recognition module 330.
- the current control module 310 is configured to control each camera to collect a current gesture image of the current user.
- the current splicing module 320 is configured to perform splicing processing on each current gesture image to obtain a current spliced image.
- the gesture recognition module 330 is configured to perform gesture recognition according to the current spliced image.
- FIG. 4 is a block schematic diagram showing another implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention.
- the current splicing module 320 may further include a pre-processing unit 321, a registration unit 322, and a fusion unit 323.
- the pre-processing unit 321 is configured to preprocess each current gesture image to obtain a corresponding image to be registered.
- the registration unit 322 is configured to perform registration processing on all the images to be registered to obtain an image to be fused.
- the fusion unit 323 is configured to perform image fusion and boundary smoothing processing on the image to be fused to obtain the current stitched image.
- the gesture recognition apparatus 300 may further include a feature extraction unit 331, a comparison unit 332, and an action determination unit 333. The feature extraction unit 331 is configured to extract a current gesture feature from the current stitched image; the comparison unit 332 is configured to compare the current gesture feature with a specified gesture feature in the database; and the action determination unit 333 is configured to determine the current gesture action according to the comparison result.
- the present invention also provides a virtual reality device which, in one aspect, as shown in FIG. 5, includes a processor 502 and a memory 501, the memory storing instructions for controlling the processor 502 to perform the above gesture recognition method for a virtual reality device.
- the virtual reality device 500 may further include an interface device 503, an input device 504, a display device 505, a communication device 506, and the like.
- although a plurality of devices are illustrated in FIG. 5, the present invention may involve only some of them, for example the processor 502, the memory 501, and the display device 505.
- the communication device 506 can communicate by wire or wirelessly, for example.
- the above interface device 503 includes, for example, a headphone jack, a USB interface, and the like.
- the input device 504 described above may include, for example, a touch screen, a button, and the like.
- the display device 505 described above is, for example, a liquid crystal display, a touch display, or the like.
- the virtual reality device may be, for example, a virtual reality helmet or a virtual reality glasses or the like.
- the virtual reality device includes at least two cameras 1 for capturing gesture images, and the above-described gesture recognition apparatus 200 for a virtual reality device.
- the front cover of the virtual reality device is provided with four first cameras 11, arranged in a rectangle or a square, and the viewing angles of adjacent first cameras 11 partially overlap;
- the two side covers of the virtual reality device are each provided with a second camera 12, and the viewing angle of each second camera 12 partially overlaps that of the adjacent first cameras 11.
- the four first cameras 11 widen the horizontal and vertical viewing angles, increasing the range over which the user's hand can move up and down and left and right; the two second cameras 12 further widen the horizontal viewing angle, expanding the left-right movement range of the user's hand. In this way, shooting with a viewing angle exceeding 180 degrees is achieved, and blind spots are avoided.
- the "front cover" is specifically the side facing away from the user's eyes when the virtual reality device is worn, and a "side cover" is specifically any surface other than the "front cover" and the surface opposite the "front cover".
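Back-of-the-envelope arithmetic for how overlapping cameras exceed a single camera's viewing angle: each camera contributes its field of view minus the part it shares with its neighbor. The per-camera FOV of 60 degrees and the 10-degree overlaps below are assumed values for illustration, not taken from the patent:

```python
# Combined horizontal field of view of a row of cameras whose adjacent
# viewing angles partially overlap. FOV and overlap values are assumed.

def combined_fov(fovs, overlaps):
    """fovs: per-camera horizontal FOV in degrees, left to right.
    overlaps: overlap between each adjacent pair (len(fovs) - 1 entries)."""
    assert len(overlaps) == len(fovs) - 1
    return sum(fovs) - sum(overlaps)

# One horizontal row: side camera, two front cameras, side camera.
row_fovs = [60, 60, 60, 60]
row_overlaps = [10, 10, 10]
horizontal = combined_fov(row_fovs, row_overlaps)
```

With these assumed numbers the row covers 210 degrees, which illustrates how adding the two side cameras pushes the stitched viewing angle past 180 degrees while the overlaps still leave registration room for stitching.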
- in some embodiments, the cameras 1 are all depth cameras. Since the images captured by a depth camera are already grayscale images, no step of converting a color image into a grayscale image is needed, so the virtual reality device performs the above gesture recognition method faster; moreover, the images captured by a depth camera contain less noise.
- the invention can be a system, method and/or computer program product.
- the computer program product can comprise a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement various aspects of the present invention.
- the computer readable storage medium can be a tangible device that can hold and store the instructions used by the instruction execution device.
- the computer readable storage medium can be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of computer readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanical encoding devices such as punched cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
- a computer readable storage medium, as used herein, is not to be construed as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
- the computer readable program instructions described herein can be downloaded from a computer readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in each computing/processing device .
- computer program instructions for performing the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
- the computer readable program instructions can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- customized electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGA), or programmable logic arrays (PLA), can be personalized using state information of the computer readable program instructions, and these electronic circuits can execute the computer readable program instructions to implement various aspects of the present invention.
- the computer readable program instructions can be provided to a processor of a general purpose computer, a special purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- the computer readable program instructions can also be stored in a computer readable storage medium that causes a computer, a programmable data processing device, and/or other devices to operate in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- the computer readable program instructions can also be loaded onto a computer, another programmable data processing device, or another device, causing a series of operational steps to be performed on that computer, programmable data processing device, or other device so as to produce a computer-implemented process.
- in this way, the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams can represent a module, a program segment, or a portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks can also occur in an order different from that shown in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Abstract
A gesture recognition method and device for a virtual reality apparatus, and a virtual reality apparatus. The gesture recognition method comprises: controlling each camera to capture a current gesture image of a current user (S110); splicing each current gesture image to obtain a current spliced image (S120); and performing gesture recognition according to the current spliced image (S130). The gesture recognition method can enlarge photographing angles of cameras; images obtained by cameras (11, 12) at different positions are spliced by a splicing module (320) to obtain a spliced image with the photographing angle exceeding a viewing angle of a single camera, and thus, the sense of immersion of the user is improved when using a virtual reality apparatus.
Description
本发明涉及虚拟现实设备技术领域,更具体地,涉及一种用于虚拟现实设备的手势识别方法、装置及虚拟现实设备。The present invention relates to the field of virtual reality device technologies, and in particular, to a gesture recognition method, apparatus, and virtual reality device for a virtual reality device.
虚拟现实(Virtual Reality,简称VR),虚拟现实技术将是支撑一个定性和定量相结合,感性认识和理性认识相结合的综合集成多维信息空间的关键技术。随着网络的速度的提升,基于虚拟现实技术的一个互联网时代正悄然走来,它将极大地改变人们的生产和生活方式。其具体内涵是:综合利用计算机图形系统和各种现实及控制等接口设备,在计算机上生成的、可交互的三维环境中提供沉浸感觉的技术。Virtual Reality (VR), virtual reality technology will be a key technology to support a comprehensive and integrated multidimensional information space combining qualitative and quantitative, perceptual knowledge and rational understanding. As the speed of the Internet increases, an Internet era based on virtual reality technology is quietly coming, which will dramatically change people's production and lifestyle. The specific connotation is: comprehensive use of computer graphics systems and various interfaces such as reality and control to provide immersive sensation technology in a three-dimensional environment that can be generated on a computer.
虚拟现实设备的沉浸感来自于与外界的隔绝,尤其是视觉和听觉的隔绝,使得大脑被欺骗,产生脱离于现实世界的虚拟沉浸感。目前,虚拟现实设备的人机交互的方式主要是语言识别,眼球追踪以及手势识别等。The immersion of virtual reality devices comes from isolation from the outside world, especially visual and auditory isolation, which causes the brain to be deceived and create a virtual immersion from the real world. At present, the human-computer interaction methods of virtual reality devices are mainly language recognition, eye tracking and gesture recognition.
发明内容Summary of the invention
根据本发明的第一方面,提供了一种用于虚拟现实设备的手势识别方法,所述虚拟现实设备包括至少两个摄像头,所述手势识别方法包括:According to a first aspect of the present invention, a gesture recognition method for a virtual reality device is provided, the virtual reality device including at least two cameras, and the gesture recognition method includes:
控制每一所述摄像头采集当前用户的当前手势图像;Controlling each of the cameras to collect a current gesture image of the current user;
将每一所述当前手势图像进行拼接处理,得到当前拼接图像;Performing splicing processing on each of the current gesture images to obtain a current spliced image;
根据所述当前拼接图像进行手势识别。Gesture recognition is performed according to the current mosaic image.
根据本发明的第二方面,提供了一种用于虚拟现实设备的手势识别装置,所述虚拟现实设备包括至少两个摄像头,所述手势识别装置包括:According to a second aspect of the present invention, a gesture recognition apparatus for a virtual reality device is provided, the virtual reality device comprising at least two cameras, the gesture recognition device comprising:
当前控制模块,用于控制每一所述摄像头采集当前用户的当前手势图像;
a current control module, configured to control each of the cameras to collect a current gesture image of the current user;
当前拼接模块,用于将每一所述当前手势图像进行拼接处理,得到当前拼接图像;以及,a current splicing module, configured to perform splicing processing on each of the current gesture images to obtain a current spliced image; and,
手势识别模块,用于根据所述当前拼接图像进行手势识别。a gesture recognition module, configured to perform gesture recognition according to the current stitched image.
根据本发明的第三方面,提供了一种虚拟现实设备,包括处理器和存储器,所述存储器用于存储指令,所述指令用于控制所述处理器执行根据本发明第一方面所述的手势识别方法。According to a third aspect of the present invention, there is provided a virtual reality device comprising a processor and a memory, the memory for storing instructions for controlling the processor to perform the method according to the first aspect of the present invention Gesture recognition method.
According to a fourth aspect of the present invention, a virtual reality device is provided, including:
at least two cameras disposed at different positions, the shooting angles of adjacent cameras partially overlapping; and
the gesture recognition apparatus according to the second aspect of the present invention.
According to a fifth aspect of the present invention, a computer-readable storage medium is provided, storing program code for performing the gesture recognition method according to the first aspect of the present invention.
Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart of an embodiment of a gesture recognition method for a virtual reality device according to the present invention;
FIG. 2 is a flowchart of another embodiment of a gesture recognition method for a virtual reality device according to the present invention;
FIG. 3 is a block diagram of an implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention;
FIG. 4 is a block diagram of another implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention;
FIG. 5 is a block diagram of an implementation structure of a virtual reality device according to the present invention;
FIG. 6 is a left side view of another implementation structure of a virtual reality device according to the present invention;
FIG. 7 is a right side view of another implementation structure of a virtual reality device according to the present invention.
Description of reference signs:
1 - camera; 11 - first camera;
12 - second camera.
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the invention.
The following description of at least one exemplary embodiment is merely illustrative and in no way limits the invention, its application, or its uses.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be considered part of the specification.
In all of the examples shown and discussed herein, any specific value should be interpreted as merely illustrative rather than limiting. Other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be discussed further in subsequent figures.
To solve the prior-art problem that the gesture recognition range of virtual reality devices is too narrow for users to become fully immersed, a gesture recognition method for a virtual reality device is provided, the virtual reality device including at least two cameras. The two cameras may both be ordinary color cameras, may both be depth cameras, or one may be a color camera and the other a depth camera.
FIG. 1 is a flowchart of an embodiment of a gesture recognition method for a virtual reality device according to the present invention.
As shown in FIG. 1, the gesture recognition method includes the following steps:
Step S110: controlling each camera to capture a current gesture image of the current user.
Specifically, each camera may capture the current gesture image of the current user frame by frame, for example.
Step S120: stitching the current gesture images together to obtain a current stitched image.
Image stitching consists of image registration and image fusion. To stitch multiple images into one, registration is performed first: SIFT feature points, as proposed by Lowe, are used to select and calibrate matching points between the images, after which all images are registered into a single coordinate system through an affine model. Besides unifying the coordinate system, registration also yields the overlapping region of the images captured by two adjacent cameras. Image fusion then merges the useful information of the registered images into a single picture, while handling the blurring at the stitching positions caused by differences in viewing angle, illumination, and other factors between the registered images. The fusion may, for example, employ the Gaussian pyramid technique.
As shown in FIG. 2, step S120 may specifically include the following steps:
Step S121: preprocessing each current gesture image to obtain a corresponding image to be registered.
The preprocessing specifically denoises and enhances the captured current gesture image data, and unifies the data format, image size, and resolution.
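The specification leaves these preprocessing operations abstract; the following is a minimal sketch assuming 8-bit RGB frames held as NumPy arrays. The function names, output size, and the 3x3 box filter are illustrative choices of ours, not details from the patent:

```python
import numpy as np

def to_gray(rgb):
    # Collapse an H x W x 3 color frame to grayscale with standard luma weights.
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, out_h, out_w):
    # Unify image size across cameras with nearest-neighbor sampling.
    rows = np.linspace(0, img.shape[0] - 1, out_h).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, out_w).round().astype(int)
    return img[np.ix_(rows, cols)]

def denoise_box3(img):
    # 3x3 box filter: average each pixel with its 8 neighbors (edges replicated).
    p = np.pad(img, 1, mode="edge")
    acc = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3))
    return acc / 9.0

def preprocess(rgb, out_h=240, out_w=320):
    g = to_gray(np.asarray(rgb, dtype=float))
    g = resize_nearest(g, out_h, out_w)
    g = denoise_box3(g)
    return (g - g.min()) / (np.ptp(g) + 1e-9)  # normalize to [0, 1]
```

A real implementation would likely use a library resampler and a stronger denoiser; the point is only that every camera's frame leaves this stage with the same size, format, and value range.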
Step S122: registering all the images to be registered to obtain images to be fused.
Image registration is the alignment of the images to be registered: the multiple images obtained from different cameras, at different times, or from different angles are optimally matched to obtain the images to be fused.
Image registration is always defined with respect to multiple images. In practice, one of the images to be registered is usually taken as the registration baseline, called the reference image, and another image to be registered serves as the search image. The general approach is to first select an image sub-block centered on some target point in the reference image, called the registration template, and then move the template in an orderly fashion over the search image, comparing the template with the corresponding portion of the search image at each position until the registration position is found.
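The template-search procedure described above can be sketched with normalized cross-correlation over an exhaustive search. This is a pure-NumPy illustration; the brute-force double loop and all names are ours, not the patent's:

```python
import numpy as np

def match_template(search, template):
    """Slide `template` over `search` and return the (row, col) position
    with the highest normalized cross-correlation score."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            w = search[r:r + th, c:c + tw]
            wz = w - w.mean()
            score = (wz * t).sum() / (np.sqrt((wz * wz).sum()) * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Production code would use an FFT-based or library correlation rather than this O(H·W·th·tw) scan, but the logic of "move the template, compare, keep the best position" is the same.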
The two images to be registered of the same target are often obtained under different conditions, such as different imaging times, different imaging positions, or even different imaging systems. Together with the various kinds of noise introduced during imaging, this makes it impossible for the two images of the same target to be identical; they can only be similar to some degree.
Depending on the image matching method used, image stitching algorithms can generally be divided into two types. Region-based stitching algorithms start from the gray values of the images to be stitched: the difference in gray values between a region of the image to be registered and a region of the same size in the reference image is computed by least squares or another mathematical method, and this difference is compared to judge the similarity of the overlapping region of the images to be stitched, thereby obtaining the range and position of the overlapping region and achieving the stitching. The images may also be transformed from the spatial domain to the frequency domain by an FFT before registration. For images with a large displacement, the rotation of the image may be corrected first, and the mapping between the two images then established. Specific methods include one-by-one comparison, hierarchical comparison, and phase correlation.
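The frequency-domain route mentioned above can be illustrated with phase correlation, which recovers the translation between two views from the phase of their cross-power spectrum. The sketch assumes a pure cyclic translation with no rotation or scale change:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the shift (dy, dx) such that b ~ np.roll(a, (dy, dx), axis=(0, 1))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks in the upper half-range back to negative shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For real camera pairs the overlap is only partial and the transform is affine rather than a pure shift, so this would serve as a coarse initial estimate before a finer registration step.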
Feature-based registration methods do not use the pixel values of the images directly; instead, features are derived from the pixels, and the corresponding feature regions of the overlapping portions are then searched and matched against those image features. Specific methods include, for example, ratio matching and feature-point matching.
Feature-based registration involves two processes: feature extraction and feature matching. First, features such as points, lines, and regions with pronounced gray-level changes are extracted from the images to be registered to form feature sets. Then a feature matching algorithm selects, as far as possible, the feature pairs with a correspondence between the feature sets of the images to be registered. A range of image segmentation techniques are used for feature extraction and boundary detection, such as the Canny operator, the Laplacian of Gaussian operator, and region growing. The extracted spatial features include closed boundaries, open boundaries, crossing lines, and other features. Feature matching algorithms include cross-correlation, distance transform, dynamic programming, structural matching, and chain-code correlation.
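Selecting corresponding feature pairs is commonly done with nearest-neighbor search over descriptors combined with Lowe's ratio test, which discards matches whose nearest neighbor is not clearly better than the second-nearest. A sketch over synthetic descriptors follows; a real system would use SIFT-style descriptors extracted from the images, and the 0.75 ratio is a conventional choice, not a value from the patent:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j) where desc_a[i]'s nearest neighbor in desc_b
    is sufficiently closer than its second-nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]        # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

The surviving pairs are what an affine (or homography) estimator would consume to bring the images into one coordinate system.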
Step S123: performing image fusion and boundary smoothing on the images to be fused to obtain the current stitched image.
The overlapping regions of the images to be stitched are fused to obtain a smooth, seamless, reconstructed current stitched image.
Image fusion is the process of combining multiple images of the same scene obtained by image sensors of different modes, or by the same sensor at different times, into a single stitched image.
After registration, because of the differences between the overlapping regions, simply superimposing the image pixels would leave a visible seam at the junction. The color values near the seam of the images to be stitched therefore need to be corrected so that they transition smoothly, achieving seamless composition.
Commonly used fusion methods include the HIS fusion method, the KL-transform fusion method, the high-pass-filter fusion method, the wavelet-transform fusion method, the pyramid-transform fusion method, and the spline-transform fusion method.
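The simplest seam treatment in this family is linear feathering, which cross-fades the two images across their overlap. A sketch for two grayscale images that overlap horizontally; the array layout and the overlap convention are assumptions for illustration:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two images sharing `overlap` columns: `left`'s last `overlap`
    columns and `right`'s first `overlap` columns cover the same scene."""
    w = np.linspace(1.0, 0.0, overlap)        # weight given to the left image
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

Pyramid (multi-band) blending, as suggested earlier in this description, generalizes this idea by feathering each frequency band separately, which hides exposure differences better than a single linear ramp.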
Step S130: performing gesture recognition according to the current stitched image.
In this way, the viewing angle over which the cameras capture images is enlarged, the range within which the user can move the hands during gesture recognition is expanded, the flexibility of use is considerably improved, and the user's sense of immersion in VR is increased.
As shown in FIG. 2, step S130 may specifically include:
Step S131: extracting a current gesture feature from the current stitched image;
Step S132: comparing the current gesture feature with specified gesture features in a database;
Step S133: determining the current gesture action according to the comparison result.
The specified gesture features may be pre-stored in the database before the virtual reality device leaves the factory, or stored in the database by the current user before use. In a specific embodiment of the present invention, the gesture recognition method further includes:
controlling each camera to capture a specified gesture image of a specified user;
stitching the specified gesture images together to obtain a reference stitched image; and
storing, in the database, the specified gesture feature corresponding to the specified gesture in the reference stitched image.
If the current gesture feature is successfully matched with the specified gesture feature of a specified gesture, for example a left swipe, that is, the current gesture action is the specified left-swipe gesture action, the function corresponding to that specified gesture action is triggered, for example opening an application.
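Steps S131 to S133 can be sketched as a nearest-neighbor lookup of the current feature vector against the stored database, with a distance threshold so that unrecognized motions are rejected. The gesture names, the feature-vector format, and the threshold value are illustrative assumptions, not details from the patent:

```python
import numpy as np

def recognize(current, database, threshold=0.5):
    """`database` maps gesture names (e.g. "swipe_left") to stored feature
    vectors; return the closest gesture, or None if nothing is near enough."""
    best_name, best_dist = None, np.inf
    for name, feat in database.items():
        d = np.linalg.norm(current - feat)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

The returned name would then be dispatched to the corresponding device function, such as opening an application on a recognized left swipe.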
The present invention also provides a gesture recognition apparatus for a virtual reality device. FIG. 3 is a block diagram of an implementation structure of such an apparatus according to the present invention.
As shown in FIG. 3, the gesture recognition apparatus 300 includes a current control module 310, a current stitching module 320, and a gesture recognition module 330. The current control module 310 is configured to control each camera to capture a current gesture image of the current user; the current stitching module 320 is configured to stitch the current gesture images together to obtain a current stitched image; and the gesture recognition module 330 is configured to perform gesture recognition according to the current stitched image.
FIG. 4 is a block diagram of another implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention.
As shown in FIG. 4, the current stitching module 320 may further include a preprocessing unit 321, a registration unit 322, and a fusion unit 323. The preprocessing unit 321 is configured to preprocess each current gesture image to obtain a corresponding image to be registered; the registration unit 322 is configured to register all the images to be registered to obtain images to be fused; and the fusion unit 323 is configured to perform image fusion and boundary smoothing on the images to be fused to obtain the current stitched image.
Further, the gesture recognition apparatus 300 may also include a feature extraction unit 331, a comparison unit 332, and an action determination unit 333. The feature extraction unit 331 is configured to extract a current gesture feature from the current stitched image; the comparison unit 332 is configured to compare the current gesture feature with specified gesture features in the database; and the action determination unit 333 is configured to determine the current gesture action according to the comparison result.
The present invention also provides a virtual reality device. In one aspect, as shown in FIG. 5, it includes a processor 502 and a memory 501, the memory 501 storing instructions for controlling the processor 502 to perform the gesture recognition method for a virtual reality device described above.
In addition, as shown in FIG. 5, the virtual reality device 500 further includes an interface device 503, an input device 504, a display device 505, a communication device 506, and so on. Although multiple devices are shown in FIG. 5, the present invention may involve only some of them, for example the processor 502, the memory 501, and the display device 505.
The communication device 506 is capable of wired or wireless communication, for example.
The interface device 503 includes, for example, a headphone jack and a USB interface.
The input device 504 may include, for example, a touch screen and buttons.
The display device 505 is, for example, a liquid crystal display or a touch display.
The virtual reality device may be, for example, a virtual reality helmet or virtual reality glasses.
In another aspect, the virtual reality device includes at least two cameras 1 and the gesture recognition apparatus 300 for a virtual reality device described above, the cameras 1 being configured to capture gesture images. The virtual reality device may be, for example, a virtual reality helmet or virtual reality glasses.
In a specific embodiment of the present invention, as shown in FIG. 6 and FIG. 7, four first cameras 11 are disposed on the front cover of the virtual reality device, the four first cameras 11 forming a rectangle or square on the front cover, with the viewing angles of adjacent first cameras 11 partially overlapping. One second camera 12 is disposed on each of the two opposite side covers of the virtual reality device, and the viewing angle of each second camera 12 partially overlaps that of the adjacent first cameras 11.
The four first cameras 11 widen the horizontal and vertical angles and enlarge the range over which the user's hands can move up, down, left, and right; the two second cameras 12 further widen the horizontal or vertical angle, enlarging the side-to-side or up-and-down range of the user's hand movements. In this way, capture over a viewing angle exceeding 180 degrees is achieved, and blind spots are avoided.
The "front cover" here is specifically the side of the virtual reality device facing away from the user's eyes when worn, and the "side covers" are the surfaces other than the front cover and the surface opposite the front cover.
In a specific embodiment of the present invention, the cameras 1 are all depth cameras. Since the images captured by a depth camera are already grayscale, the step of converting color images to grayscale is eliminated, so the virtual reality device performs the gesture recognition method described above faster; moreover, images captured by a depth camera contain relatively little noise.
The embodiments above mainly emphasize their differences from the other embodiments, but it should be clear to those skilled in the art that the embodiments may be used individually or in combination with one another as needed.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment emphasizes its differences from the others. As the apparatus embodiments correspond to the method embodiments, they are described relatively simply; for relevant details, see the description of the corresponding parts of the method embodiments. The system embodiments described above are merely illustrative, and the modules described as separate components may or may not be physically separate.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present invention.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction-executing device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from the computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical-fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation by a combination of software and hardware are all equivalent.
The embodiments of the present invention have been described above. The foregoing description is illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.
Claims (11)
- A gesture recognition method for a virtual reality device, the virtual reality device comprising at least two cameras, wherein the gesture recognition method comprises: controlling each of the cameras to capture a current gesture image of a current user; performing stitching processing on each of the current gesture images to obtain a current stitched image; and performing gesture recognition according to the current stitched image.
- The gesture recognition method according to claim 1, wherein performing stitching processing on each of the current gesture images to obtain the current stitched image comprises: preprocessing each of the current gesture images to obtain corresponding images to be registered; performing registration processing on all of the images to be registered to obtain images to be fused; and performing image fusion and boundary smoothing on the images to be fused to obtain the current stitched image.
- The gesture recognition method according to claim 1 or 2, wherein performing gesture recognition according to the current stitched image comprises: extracting a current gesture feature from the current stitched image; comparing the current gesture feature with specified gesture features in a database; and determining a current gesture action according to the comparison result.
- A gesture recognition apparatus for a virtual reality device, the virtual reality device comprising at least two cameras, wherein the gesture recognition apparatus comprises: a current control module, configured to control each of the cameras to capture a current gesture image of a current user; a current stitching module, configured to perform stitching processing on each of the current gesture images to obtain a current stitched image; and a gesture recognition module, configured to perform gesture recognition according to the current stitched image.
- The gesture recognition apparatus according to claim 4, wherein the current stitching module comprises: a preprocessing unit, configured to preprocess each of the current gesture images to obtain corresponding images to be registered; a registration unit, configured to perform registration processing on all of the images to be registered to obtain images to be fused; and a fusion unit, configured to perform image fusion and boundary smoothing on the images to be fused to obtain the current stitched image.
- The gesture recognition apparatus according to claim 4 or 5, wherein the gesture recognition module further comprises: a feature extraction unit, configured to extract a current gesture feature from the current stitched image; a comparison unit, configured to compare the current gesture feature with specified gesture features in a database; and an action determining unit, configured to determine a current gesture action according to the comparison result.
- A virtual reality device, comprising a processor and a memory, the memory storing instructions for controlling the processor to perform the gesture recognition method according to any one of claims 1-3.
- A virtual reality device, comprising: at least two cameras disposed at different positions, wherein the shooting angles of view of adjacently disposed cameras partially overlap; and the gesture recognition apparatus according to any one of claims 4-6.
- The virtual reality device according to claim 8, wherein four first cameras are disposed on a front cover of the virtual reality device, and the angles of view of adjacent first cameras partially overlap; a second camera is disposed on each of two opposite side covers of the virtual reality device, and the angle of view of each second camera partially overlaps that of the adjacent first camera.
- The virtual reality device according to claim 8 or 9, wherein each of the cameras is a depth camera.
- A computer-readable storage medium, storing program code for performing the gesture recognition method according to any one of claims 1-3.
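The overall pipeline of claim 1 — capture a gesture frame from each camera, stitch the frames, then recognize the gesture from the stitched result — can be sketched as follows. This is a minimal illustration only: the camera handles, the naive side-by-side stitch, and the scalar mean-intensity "feature" are all hypothetical stand-ins, not the patent's actual implementation.

```python
import numpy as np

def capture_frames(cameras):
    # Collect the current gesture image from every camera; `cameras` is a
    # hypothetical stand-in for the headset's camera driver handles.
    return [cam() for cam in cameras]

def stitch(frames):
    # Naive side-by-side concatenation of equally sized frames; a real
    # implementation would register and blend the overlapping views (claim 2).
    return np.concatenate(frames, axis=1)

def recognize(stitched, database):
    # Toy recognizer: compare a crude scalar feature (mean intensity) against
    # stored gesture templates and return the closest action name.
    feature = stitched.mean()
    return min(database, key=lambda name: abs(database[name] - feature))

# Two simulated cameras, each producing a 4x4 grayscale frame.
cameras = [lambda: np.full((4, 4), 10.0), lambda: np.full((4, 4), 30.0)]
frames = capture_frames(cameras)
panorama = stitch(frames)            # 4x8 stitched image
gesture = recognize(panorama, {"swipe": 20.0, "pinch": 90.0})
```

In practice each stage would be far richer (lens undistortion, homography-based registration, learned gesture classifiers), but the three-stage control flow mirrors the claimed method.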
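Claim 2's fusion-and-boundary-smoothing step can be illustrated with linear feathering across the overlap region between two already-registered images. The fixed column overlap and the `[0, 255]` normalization are assumptions for the sketch; real registration would estimate the overlap from matched features.

```python
import numpy as np

def preprocess(img):
    # Normalize to [0, 1]; real preprocessing might also denoise and undistort.
    return img.astype(float) / 255.0

def fuse_with_feather(left, right, overlap):
    # Blend two registered images whose last `overlap` columns of `left`
    # coincide with the first `overlap` columns of `right`; a linear
    # feathering ramp smooths the seam between them.
    w = np.linspace(1.0, 0.0, overlap)          # weight for the left image
    blended = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.concatenate([left[:, :-overlap], blended, right[:, overlap:]],
                          axis=1)

a = preprocess(np.full((2, 5), 255))
b = preprocess(np.full((2, 5), 255))
pano = fuse_with_feather(a, b, overlap=2)
# Two 5-column images with a 2-column overlap yield an 8-column panorama.
```

The linear ramp is the simplest choice; multi-band blending would hide seams better at the cost of extra computation.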
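The recognition step of claim 3 — extract a feature from the stitched image, compare it against a database of specified gesture features, and decide the action from the comparison result — reduces to nearest-neighbor matching. The row-mean descriptor, the gesture names, and the distance threshold below are invented for illustration.

```python
import numpy as np

def extract_features(image):
    # Toy descriptor: row-wise means. A real system might use hand keypoints,
    # contour moments, or a learned embedding.
    return image.mean(axis=1)

def match_gesture(features, database, threshold=1.0):
    # Return the database gesture whose stored feature vector is closest to
    # `features` (Euclidean distance), or None if nothing is close enough.
    best, best_d = None, float("inf")
    for name, ref in database.items():
        d = np.linalg.norm(features - ref)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else None

db = {"open_palm": np.array([0.2, 0.8]), "fist": np.array([0.9, 0.1])}
img = np.array([[0.1, 0.3], [0.7, 0.9]])    # row means -> [0.2, 0.8]
action = match_gesture(extract_features(img), db)
```

The threshold keeps unknown hand poses from being forced onto the nearest template, which matters once the recognized action drives the virtual-reality interface.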
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611073930.5 | 2016-11-29 | ||
CN201611073930.5A CN106598235B (en) | 2016-11-29 | 2016-11-29 | Gesture identification method, device and virtual reality device for virtual reality device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018098862A1 (en) | 2018-06-07 |
Family
ID=58593921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/111063 WO2018098862A1 (en) | 2016-11-29 | 2016-12-20 | Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106598235B (en) |
WO (1) | WO2018098862A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107705278B (en) * | 2017-09-11 | 2021-03-02 | Oppo广东移动通信有限公司 | Dynamic effect adding method and terminal equipment |
CN108228807A (en) * | 2017-12-29 | 2018-06-29 | 上海与德科技有限公司 | A kind of image processing method, system and storage medium |
CN108694383B (en) * | 2018-05-14 | 2024-07-12 | 京东方科技集团股份有限公司 | Gesture recognition device, control method thereof and display device |
CN110989828A (en) * | 2019-10-30 | 2020-04-10 | 广州幻境科技有限公司 | Gesture recognition method based on computer vision and gesture recognition bracelet |
KR102295265B1 (en) * | 2019-11-29 | 2021-08-30 | 주식회사 알파서클 | Apparaturs and method for real-time broardcasting of vr video filmed by several camera |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156859A (en) * | 2011-04-21 | 2011-08-17 | 刘津甦 | Sensing method for gesture and spatial location of hand |
WO2012144666A1 (en) * | 2011-04-19 | 2012-10-26 | Lg Electronics Inc. | Display device and control method therof |
CN204406325U (en) * | 2015-01-09 | 2015-06-17 | 长春大学 | A kind of gesture identifying device |
CN204463032U (en) * | 2014-12-30 | 2015-07-08 | 青岛歌尔声学科技有限公司 | System and the virtual reality helmet of gesture is inputted in a kind of 3D scene |
CN105068649A (en) * | 2015-08-12 | 2015-11-18 | 深圳市埃微信息技术有限公司 | Binocular gesture recognition device and method based on virtual reality helmet |
CN205080498U (en) * | 2015-09-07 | 2016-03-09 | 哈尔滨市一舍科技有限公司 | Mutual equipment of virtual reality with 3D subassembly of making a video recording |
CN105892633A (en) * | 2015-11-18 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Gesture identification method and virtual reality display output device |
CN105892637A (en) * | 2015-11-25 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Gesture identification method and virtual reality display output device |
KR20160121963A (en) * | 2015-04-13 | 2016-10-21 | 주식회사 아이카이스트 | Infrared touch screen system that can be gesture recognition |
CN106125848A (en) * | 2016-08-02 | 2016-11-16 | 宁波智仁进出口有限公司 | A kind of Intelligent worn device |
- 2016-11-29: CN application CN201611073930.5A filed (granted as CN106598235B, status: Active)
- 2016-12-20: PCT application PCT/CN2016/111063 filed (published as WO2018098862A1, Application Filing)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113190106A (en) * | 2021-03-16 | 2021-07-30 | 青岛小鸟看看科技有限公司 | Gesture recognition method and device and electronic equipment |
CN113190106B (en) * | 2021-03-16 | 2022-11-22 | 青岛小鸟看看科技有限公司 | Gesture recognition method and device and electronic equipment |
US12118152B2 (en) | 2021-03-16 | 2024-10-15 | Qingdao Pico Technology Co., Ltd. | Method, device for gesture recognition and electronic equipment |
CN113141502A (en) * | 2021-03-18 | 2021-07-20 | 青岛小鸟看看科技有限公司 | Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment |
CN113141502B (en) * | 2021-03-18 | 2022-02-08 | 青岛小鸟看看科技有限公司 | Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106598235A (en) | 2017-04-26 |
CN106598235B (en) | 2019-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018098862A1 (en) | Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus | |
CN108701376B (en) | Recognition-based object segmentation of three-dimensional images | |
US11308347B2 (en) | Method of determining a similarity transformation between first and second coordinates of 3D features | |
JP4692371B2 (en) | Image processing apparatus, image processing method, image processing program, recording medium recording image processing program, and moving object detection system | |
US11051000B2 (en) | Method for calibrating cameras with non-overlapping views | |
CN107409166B (en) | Automatic generation of panning shots | |
US11842514B1 (en) | Determining a pose of an object from rgb-d images | |
JP5538617B2 (en) | Methods and configurations for multi-camera calibration | |
JP5952001B2 (en) | Camera motion estimation method and apparatus using depth information, augmented reality system | |
US9519968B2 (en) | Calibrating visual sensors using homography operators | |
JP6230751B1 (en) | Object detection apparatus and object detection method | |
US9129435B2 (en) | Method for creating 3-D models by stitching multiple partial 3-D models | |
WO2016188010A1 (en) | Motion image compensation method and device, display device | |
WO2016029939A1 (en) | Method and system for determining at least one image feature in at least one image | |
CN109521879B (en) | Interactive projection control method and device, storage medium and electronic equipment | |
US11620730B2 (en) | Method for merging multiple images and post-processing of panorama | |
US20160050372A1 (en) | Systems and methods for depth enhanced and content aware video stabilization | |
US9400924B2 (en) | Object recognition method and object recognition apparatus using the same | |
KR20210010930A (en) | Method, system and computer program for remote control of a display device via head gestures | |
JP6175583B1 (en) | Image processing apparatus, actual dimension display method, and actual dimension display processing program | |
CN118648019A (en) | Advanced temporal low-light filtering with global and local motion compensation | |
JP6388744B1 (en) | Ranging device and ranging method | |
Wang et al. | Depth map restoration and upsampling for kinect v2 based on ir-depth consistency and joint adaptive kernel regression | |
Chen et al. | Screen image segmentation and correction for a computer display | |
Petrou et al. | Super-resolution in practice: the complete pipeline from image capture to super-resolved subimage creation using a novel frame selection method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16922798; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16922798; Country of ref document: EP; Kind code of ref document: A1 |