WO2017113793A1 - Method and apparatus for determining area of finger in image, and a wrist-type device - Google Patents
- Publication number
- WO2017113793A1 (PCT/CN2016/093225)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- finger
- hand
- wrist
- processor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
Definitions
- the present invention relates to the field of smart wearable devices, and in particular, to a method, device and wrist device for determining an area where a finger is located in an image.
- wrist-worn smart devices such as smart watches and smart bracelets are becoming increasingly integrated and feature-rich.
- a large proportion of mobile phone functions can now be realized through smart watches and smart bracelets, which greatly simplifies the way users receive and deliver information.
- compared with a conventional smartphone, however, a wrist smart device is limited by its small display screen.
- on the one hand, the user cannot conveniently operate the related functions through the touch screen or buttons, which easily causes misoperation.
- on the other hand, when a smart watch is worn on one hand, all but the simplest operations (such as wake-up and sleep) must be completed with the other hand; the smart watch cannot be operated independently with one hand.
- therefore, the smart watch still has significant shortcomings in content display and operation.
- some wrist devices capture the motion of the user's fingers through a camera disposed on the device and control the device according to that motion.
- the camera of such a device is usually disposed on the inner side of the wrist, and the finger image is taken toward the user's palm.
- however, because the environment in which the user is located is complex, identifying specific areas such as fingers or joints in the image is susceptible to interference from ambient light and the shooting background, and thus the accuracy of finding the area where the finger is located in the image is low.
- the technical problem to be solved by the present invention is that the accuracy of finding the area where the finger is located in the hand image is low in the prior art.
- the present invention provides a method for determining an area in which a finger is located in an image, comprising the steps of: acquiring a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand and the second image is an image taken along the wrist toward the palm of the hand; determining a direction in which the finger is located according to the first image; and determining an area in which the finger is located in the second image according to the direction in which the finger is located.
- the determining, according to the first image, of a direction in which the finger is located comprises: identifying, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; fitting a straight line using the apexes of the joints; and taking a direction at a predetermined angle to the straight line as the direction in which the finger is located.
- the identifying, in the first image, of the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand comprises: removing the foreground and/or background image from the first image; identifying, in the first image after the foreground and/or background image has been removed, the contour of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and identifying the vertices of the joints according to the curvature of the contour.
- the removing of the foreground and/or background image from the first image comprises: performing color space conversion processing on the first image; performing binarization processing on the color-space-converted first image; and removing the foreground and/or background image from the binarized first image.
- alternatively, the removing of the foreground and/or background image from the first image comprises: acquiring a depth value of each pixel in the first image; comparing the depth value of each pixel with a preset depth range to determine the finger image and the foreground and/or background image in the first image; and removing the foreground and/or background image.
- the present invention also provides an apparatus for determining an area in which a finger is located in an image, comprising:
- an acquiring unit, configured to acquire a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand, and the second image is an image taken along the wrist toward the palm; a direction determining unit, configured to determine, according to the first image, a direction in which the finger is located; and an area determining unit, configured to determine, according to the direction in which the finger is located, an area where the finger is located in the second image.
- the direction determining unit comprises: a vertex determining unit, configured to identify, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; a fitting unit, configured to fit a straight line using the vertices of the joints; and an angular direction determining unit, configured to take a direction at a predetermined angle to the straight line as the direction in which the finger is located.
- the vertex determining unit comprises: a background removing unit, configured to remove the foreground and/or background image from the first image; a contour identifying unit, configured to identify, in the first image after the foreground and/or background image has been removed, the contour of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and a vertex recognition unit, configured to recognize the vertices of the joints according to the curvature of the contour.
- the background removing unit includes: a color space converting unit, configured to perform color space conversion processing on the first image; a binarization processing unit, configured to perform binarization processing on the color-space-converted first image; and a background processing unit, configured to remove the foreground and/or background image from the binarized first image.
- alternatively, the background removing unit includes: a depth value acquiring unit, configured to acquire a depth value of each pixel in the first image; an image determining unit, configured to compare the depth value of each pixel with a preset depth range to determine the finger image and the foreground and/or background image in the first image; and an image removal unit, configured to remove the foreground and/or background image.
- the present invention also provides a wrist device, comprising: a wristband; a first camera device disposed on the wristband; a second camera device disposed on the wristband opposite the first camera device, the lens of the second camera device pointing in the same direction as that of the first camera device; and a processor configured to process the hand images collected by the first camera device and the second camera device.
- the first imaging device is configured to capture a first image along the wrist toward the back of the hand
- the second imaging device is configured to capture a second image along the wrist toward the palm of the hand
- the processor determines, using the above method, the area in which the finger is located in the second image.
- the wrist device is a smart watch
- the processor is disposed in a dial
- the first camera device and the second camera device are respectively disposed on the dial and the strap
- the camera device disposed on the strap is coupled to the processor by a connecting member disposed within the strap.
- the connecting member is a flexible circuit board.
- the present invention also provides another wrist device, comprising: a wristband; a first camera device disposed on the wristband; a second camera device disposed on the wristband opposite the first camera device, the lens of the second camera device pointing in the same direction as that of the first camera device; a first processor configured to process the hand image collected by the first camera device; and a second processor configured to process the hand image collected by the second camera device according to the processing result of the first processor.
- the first imaging device is configured to capture a first image along the wrist toward the back of the hand; the second imaging device is configured to capture a second image along the wrist toward the palm; the first processor is configured to determine, according to the first image, a direction in which the finger is located; and the second processor is configured to determine, according to the direction in which the finger is located, an area in which the finger is located in the second image.
- the wrist device is a smart watch
- the first processor and the second processor are respectively disposed at a dial and a watchband
- the first camera device and the second camera device are respectively disposed at the dial and on the watchband
- the processor disposed on the strap and the processor disposed in the dial are connected by a connecting member disposed in the strap.
- the connecting member is a flexible circuit board.
- with the method and device for determining the area where the finger is located in an image, the direction in which the finger is located can be obtained by analyzing the image taken along the wrist toward the back of the hand; then, according to that direction, the area where the finger is located can be determined in the image taken along the wrist toward the palm.
- the activity of the finger can then be judged within that area, and finally various control operations can be realized according to the activity of specific parts of the finger.
- the wrist device provided by the invention can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the processor can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.
- FIG. 1 is a schematic structural diagram of a wrist device according to Embodiment 1 of the present invention.
- FIG. 2 is a schematic diagram of a hand image collected by an imaging device of a wrist device according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of a hand image collected by another camera device of a wrist device according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of a wrist device according to Embodiment 2 of the present invention.
- FIG. 5 is a flowchart of a method for determining an area where a finger is located in an image according to an embodiment of the present invention
- FIG. 6 is a schematic diagram of a direction in which a finger is determined by a method for determining an area where a finger is located in an image according to an embodiment of the present invention
- FIG. 7 is a schematic diagram of an area where a finger is determined by a method for determining an area where a finger is located in an image according to an embodiment of the present invention
- FIG. 8 is a schematic diagram of a specific scene for determining the direction in which the finger is located
- FIG. 9 is a schematic structural diagram of an apparatus for determining an area where a finger is located in an image according to an embodiment of the present invention.
- the terms "mounted", "connected" and "coupled" are to be understood broadly: the connection may be a fixed, detachable or integral connection; a mechanical or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or internal communication between two components; and it may be wireless or wired.
- the embodiment of the present invention provides a wrist device.
- as shown in FIG. 1, the device includes a wristband 10, a first camera device 11 and a second camera device 12; the first camera device 11 and the second camera device 12 are both provided on the wristband 10, are disposed opposite each other, and point in the same direction.
- the first imaging device 11 can capture the hand image along the wrist of the wearer in the direction of the back of the hand
- the second imaging device 12 can capture the hand image in the direction of the palm of the hand along the wrist of the wearer.
- for different types of wrist devices the camera devices are arranged differently, but their angle relative to the wearer's arm is relatively fixed.
- the first camera 11 and the second camera 12 disposed in this manner can respectively acquire hand images as shown in FIGS. 2 and 3.
- the processor 13 is connected to the first imaging device 11 and the second imaging device 12 for processing the hand images collected by the first imaging device 11 and the second imaging device 12.
- the processor 13 can perform various processing on the image, such as recognizing a hand motion in the image, controlling the device according to the hand motion reflected by the image, etc., which will be described in detail in the following embodiments.
- the above device can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the processor 13 can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.
- the wrist device may be a smart watch.
- when the user wears the watch, the dial is usually located on the outside of the wrist and the strap surrounds the wrist; therefore, the first camera 11 can be disposed at the dial and the second camera 12 can be disposed on the strap.
- disposed in this way and facing the hand, the first camera device 11 has exactly the angle and direction needed to capture the back of the hand, while the second camera device 12, disposed on the strap (for example, near or on the strap buckle) and facing the hand, has exactly the angle and direction needed to capture the palm and fingers.
- this structure does not require the user to adjust the position of the cameras, making the device easier to wear.
- as the processing core of the smart watch, the processor 13 can be disposed at the dial.
- the connecting component between the second camera 12 and the processor 13 is disposed in the watchband, and the connecting component may be a flexible circuit board.
- the embodiment of the present invention provides a wrist device.
- as shown in FIG. 4, the device includes a wristband 20, a first camera device 21 and a second camera device 22; the first camera device 21 and the second camera device 22 are both provided on the wristband 20, are disposed opposite each other, and point in the same direction.
- the first camera device 21 and the second camera device 22 arranged in this way can respectively capture hand images as shown in FIGS. 2 and 3.
- the first processor 23 is connected to the first camera 21 for processing the hand image collected by the first camera 21;
- the second processor 24 is connected to the second imaging device 22 for processing the hand image acquired by the second imaging device 22 according to the processing result of the first processor 23.
- the first processor 23 and the second processor 24 can each perform various kinds of processing on the images they receive, for example identifying a hand motion in the image or controlling the device according to the hand motion reflected by the image; this will be described in detail in the subsequent embodiments.
- the above device can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the two processors can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.
- the wrist device may be a smart watch, with the first camera device 21 and the second camera device 22 respectively disposed at the dial and on the watchband and the corresponding processor disposed nearby; for example, the first camera device 21 and the first processor 23 may be disposed at the dial, while the second camera device 22 and the second processor 24 are disposed on the strap near the strap buckle.
- the processor disposed on the strap and the processor disposed in the dial are connected by a connecting member disposed in the strap, and the connecting member is preferably a flexible circuit board.
- in the example shown in FIG. 4 the first processor 23 and the second processor 24 are provided separately, but the invention is not limited thereto and the two processors may also be provided together.
- An embodiment of the present invention provides a method for determining the area where a finger is located in an image; the method may be performed by the processor 13 in Embodiment 1 or by the first processor 23 and the second processor 24 in Embodiment 2. As shown in FIG. 5, the method includes the following steps:
- the image shown in FIG. 2 displays an image of the back of the wearer's hand, and characteristic information such as the outline of the back of the hand, the tilt angle of the image content and the tilt state of the hand joints can be obtained by analyzing and processing the image; the orientation of the wearer's fingers can then be estimated from this feature information. Those skilled in the art will understand that the direction in which the finger is located is not necessarily perpendicular to the lower edge of the image; owing to factors such as the user's wearing state and the state of the hand joints, the direction usually forms a certain angle with the lower edge of the image, as indicated by the arrow in FIG. 6.
- under normal conditions the second imaging device 12 can collect an image containing the fingers, so the position of the fingertip can be found in the image shown in FIG. 3 according to the direction estimated in step S2. Specifically, the area can be determined in the image according to a preset length and the above direction; further, by setting preset distances, more specific regions such as fingertips and joints can be determined. Taking the fingertip area as an example, an area where the fingertip may be present, as shown in FIG. 7, can be determined according to the above method; alternatively, the image may be divided into a plurality of segments according to a preset scale value and the above direction information, and one or more of those segments may then be selected.
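- as an illustration of how such an area might be carved out of the second image once the finger direction is known, the following sketch keeps only a band of preset length projected along the estimated direction; the function name, the start point at the middle of the lower image edge and the band length expressed as a fraction of the image diagonal are illustrative assumptions, not values taken from this description.

```python
import cv2
import numpy as np

def fingertip_region_mask(palm_img, direction_deg, preset_len_frac=0.35):
    """Build a mask for the band of the palm-side image where the fingertips
    are expected, given the finger direction estimated from the back-of-hand
    image (angle measured against the lower image edge)."""
    h, w = palm_img.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)

    # Unit vector pointing from the wrist edge towards the fingertips.
    theta = np.deg2rad(direction_deg)
    d = np.array([np.cos(theta), -np.sin(theta)])  # image y axis points down

    # Start from the middle of the lower edge (wrist side) and keep only the
    # band whose distance along the direction lies in a preset range.
    origin = np.array([w / 2.0, h - 1.0])
    diag = np.hypot(w, h)
    band_start = preset_len_frac * diag
    band_end = 2 * preset_len_frac * diag

    ys, xs = np.mgrid[0:h, 0:w]
    proj = (xs - origin[0]) * d[0] + (ys - origin[1]) * d[1]
    mask[(proj >= band_start) & (proj <= band_end)] = 255
    return mask

# usage: restrict the fingertip search to the masked band
# palm = cv2.imread("palm.png")
# roi = cv2.bitwise_and(palm, palm, mask=fingertip_region_mask(palm, 75.0))
```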
- when recognizing finger motion, the specific part can be searched for preferentially within the area determined above, thereby reducing the amount of computation in the recognition operation.
- in this way the direction in which the finger is located can be obtained; then, according to that direction, the area where the finger is located can be determined in the image taken along the wrist toward the palm, the activity of the finger can be judged within that area, and finally various control operations can be performed according to the activity of specific parts of the finger.
- step S2 may include:
- a direction at a predetermined angle to the fitted straight line is taken as the direction in which the finger is located. Since the fingers necessarily extend downward from the apexes of the joints, the downward direction forming the preset angle with the straight line can be taken as the direction in which the finger is located, where the preset angle may be 90 degrees.
- the above preferred solution determines the direction of the finger based on the position of the joint, which has a high accuracy.
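- a minimal sketch of this direction estimation, assuming the knuckle apexes have already been located as (x, y) pixel coordinates, might look as follows; the least-squares fit via SVD and the `fingers_upward` flag that encodes the camera orientation are illustrative choices, not taken from this description.

```python
import numpy as np

def finger_direction(knuckle_pts, fingers_upward=True):
    """Fit a straight line through the detected knuckle apexes (>= 2 points,
    (x, y) image coordinates) and return a unit vector perpendicular to it,
    taken as the direction in which the fingers lie (preset angle = 90 deg).

    Whether the fingers lie above or below the knuckle line in the image
    depends on how the camera is mounted; `fingers_upward` encodes that
    assumption."""
    pts = np.asarray(knuckle_pts, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    # First principal axis = least-squares direction of the knuckle line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    line_dir = vt[0]
    normal = np.array([-line_dir[1], line_dir[0]])   # rotate the line by 90 deg
    if fingers_upward == (normal[1] > 0):            # image y axis points down
        normal = -normal
    return normal / np.linalg.norm(normal)

# knuckles of e.g. index, middle, ring and little finger found in the first image
# print(finger_direction([(120, 80), (160, 70), (200, 75), (235, 90)]))
```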
- step S21 may include:
- S211 removing the foreground and/or background image from the first image.
- Gaussian filtering is first applied to the image to smooth image noise and reduce its influence on the detection result.
- the Gaussian kernel function is as follows:
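- the standard two-dimensional Gaussian kernel is G(x, y) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²)), where the standard deviation σ controls the degree of smoothing.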
- the local maxima of the gradient magnitude are then calculated and the corresponding pixels are retained.
- the pixels to be kept are determined according to a double threshold, and boundary tracking is performed on the remaining pixels to complete the edge extraction.
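- the pipeline just described (Gaussian smoothing, retaining local gradient maxima, double thresholding and boundary tracking) matches the classical Canny edge detector, so a minimal OpenCV sketch might look as follows; the kernel size, σ and the two thresholds are illustrative assumptions.

```python
import cv2

def back_of_hand_edges(first_image_bgr, low_thr=50, high_thr=150):
    """Gaussian smoothing followed by Canny edge extraction, which internally
    performs gradient computation, non-maximum suppression, double
    thresholding and edge tracking as described above."""
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)   # suppress image noise
    return cv2.Canny(blurred, low_thr, high_thr)

# edges = back_of_hand_edges(cv2.imread("back_of_hand.png"))
```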
- the apexes of the joints are recognized according to the curvature of the contour: after the edge of the back of the hand has been obtained, the convex joint portions can be extracted from the shape of the back of the hand.
- at the convex edge of a joint the curvature changes abruptly, that is, the lines on both sides of the joint bulge are relatively straight, while the line over the bulge bends strongly, turning by close to 180 degrees.
- to detect this, the image edges are first sampled and vectorized into feature lines with length and direction statistics.
- the direction is obtained from the positions of the pixels and their first-order differences.
- the dot products of these vectors are calculated to obtain the angles between the line segments.
- all approximately straight segments are then found on every edge, for example segments whose average angle change is no more than 25 degrees.
- the straight segments are arranged in order and the change of direction of the curved segments between them is calculated; for example, if the change of direction is greater than 140 degrees and the distance is greater than a certain threshold, the corresponding location is taken as a candidate joint. After the corresponding noise and repeated results are removed, it is determined to be a convex joint portion.
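- a rough sketch of this curvature test on a sampled edge contour is given below; the 25-degree straightness limit and the 140-degree accumulated-turn threshold are taken from the description above, while the sampling step, the contour-extraction call and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def knuckle_candidates(edge_img, step=6, straight_deg=25, turn_deg=140):
    """Sample the longest edge contour, vectorise it into short segments via
    first-order differences, and report points where the accumulated change
    of direction between two straight stretches exceeds `turn_deg`."""
    contours, _ = cv2.findContours(edge_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)[::step].astype(float)
    if len(pts) < 3:
        return []

    vecs = np.diff(pts, axis=0)                       # first-order differences
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9
    # angle between consecutive segment directions from their dot product
    cos_a = np.clip((vecs[:-1] * vecs[1:]).sum(axis=1), -1.0, 1.0)
    turn = np.degrees(np.arccos(cos_a))

    candidates, acc, start = [], 0.0, None
    for i, a in enumerate(turn):
        if a > straight_deg:                 # inside a curved stretch
            acc += a
            start = i if start is None else start
        else:                                # back on a straight stretch
            if start is not None and acc >= turn_deg:
                mid = (start + i) // 2 + 1   # apex of the bulge
                candidates.append(tuple(pts[mid].astype(int)))
            acc, start = 0.0, None
    return candidates
```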
- the foregoing step S211 may include:
- human skin consists of a dermis layer covered by a thin epidermis layer; light is absorbed by melanin in the epidermis, while absorption and scattering occur simultaneously in the dermis.
- the difference in skin color between individuals is mainly a difference in brightness caused by different concentrations of melanin in the epidermis.
- the optical properties of the dermis are essentially the same for everyone, so the skin colors of individuals of the same ethnicity have strong commonality, are clearly different from most background colors, and form a small, compact cluster in color space.
- skin detection based on color is therefore feasible.
- the image captured by the camera is an RGB image.
- in RGB space, however, skin tones overlap considerably with non-skin tones and are strongly affected by brightness;
- in the HSV color space, thanks to the good separation of hue, saturation and brightness, there is less overlap with non-skin points;
- in the CbCr subspace of the YCbCr color space, skin color is well concentrated in an ellipse-like region and its distribution on the Cb and Cr components is also compact. It is therefore feasible to convert the hand image from RGB to the YCbCr or HSV color space.
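- a minimal sketch of such a color-space conversion and binarization is given below; the rectangular Cr/Cb gate is a simple stand-in for the ellipse-like skin cluster, and the particular Cr/Cb ranges and morphological clean-up are illustrative assumptions, not values from this description.

```python
import cv2
import numpy as np

def skin_mask_ycrcb(image_bgr, cr_range=(133, 173), cb_range=(77, 127)):
    """Convert the image to YCrCb, binarize it with a rectangular Cr/Cb gate,
    and clean up the mask so that non-skin (foreground/background) pixels can
    be discarded."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1])).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

# hand_only = cv2.bitwise_and(img, img, mask=skin_mask_ycrcb(img))
```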
- alternatively, step S211 may include:
- the part of the hand to be imaged is about 10-15 cm from the imaging device, so the focus of the camera can be fixed and only the 10-15 cm range needs to be kept sharp; at the same time, when the wrist is raised for operation, other objects (foreground and background) in the imaging range are usually closer or farther than 10-15 cm and therefore out of focus, so the hand can easily be distinguished from the foreground and background by a sharpness (blur) measure.
- the foreground and/or background image is then removed, where content too close to the first camera 11 is the foreground image and content too far from the first camera 11 is the background image.
- the above preferred solution removes both the foreground and the background image on the basis of depth-of-field information, retaining only the near scene containing the back of the hand, and then identifies the joints within that scene, thereby further improving the recognition efficiency.
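- assuming a per-pixel depth map is available (for example from a depth camera, or from a focus/blur measure converted to distance), a minimal sketch of this removal step might look as follows; the 10-15 cm band is taken from the description above, everything else is an illustrative assumption.

```python
import numpy as np

def keep_hand_by_depth(image, depth_cm, near_cm=10.0, far_cm=15.0):
    """Keep only pixels whose depth value falls inside the preset 10-15 cm
    band where the back of the hand is expected; everything nearer is treated
    as foreground, everything farther as background, and both are zeroed out."""
    depth_cm = np.asarray(depth_cm, dtype=np.float32)
    hand_mask = (depth_cm >= near_cm) & (depth_cm <= far_cm)
    cleaned = image.copy()
    cleaned[~hand_mask] = 0          # remove foreground and background pixels
    return cleaned, hand_mask
```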
- An embodiment of the present invention provides a device for determining the area where a finger is located in an image; as shown in FIG. 9, the device includes:
- the acquiring unit 91 is configured to acquire the first image and the second image, wherein the first image is an image taken along the wrist toward the back of the hand, and the second image is an image taken along the wrist toward the palm direction;
- a direction determining unit 92 configured to determine a direction in which the finger is located according to the first image
- the area determining unit 93 is configured to determine an area where the finger is located in the second image according to a direction in which the finger is located.
- with this device the direction in which the finger is located can be obtained; then, according to that direction, the area where the finger is located can be determined in the image taken along the wrist toward the palm, the activity of specific parts of the finger can be judged within that area, and finally various control operations can be performed according to that activity.
- this improves the efficiency of finding the area where the finger is located in the image and thus the efficiency of finger activity recognition.
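- to make the division of labour between the acquiring unit 91, the direction determining unit 92 and the area determining unit 93 concrete, the following sketch wires them into one object; the class and method names are illustrative assumptions, the camera objects are assumed to expose a read() method returning an image, and `back_of_hand_edges`, `knuckle_candidates`, `finger_direction` and `fingertip_region_mask` refer to the sketches given earlier in this text.

```python
import numpy as np

class FingerAreaDevice:
    """Illustrative counterpart of the acquiring unit 91, the direction
    determining unit 92 and the area determining unit 93 described above."""

    def __init__(self, back_camera, palm_camera):
        self.back_camera = back_camera   # along the wrist toward the back of the hand
        self.palm_camera = palm_camera   # along the wrist toward the palm

    def acquire(self):
        # acquiring unit 91: first image (back of hand) and second image (palm)
        return self.back_camera.read(), self.palm_camera.read()

    def direction(self, first_image):
        # direction determining unit 92: knuckle detection followed by a line fit
        knuckles = knuckle_candidates(back_of_hand_edges(first_image))
        return finger_direction(knuckles)

    def finger_area(self, second_image, direction_vec):
        # area determining unit 93: restrict the palm-side image to the finger band
        angle = np.degrees(np.arctan2(-direction_vec[1], direction_vec[0]))
        return fingertip_region_mask(second_image, angle)
```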
- the direction determining unit 92 includes:
- a vertex determining unit, configured to identify, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand;
- a fitting unit, configured to fit a straight line using the vertices of the joints;
- An angle direction determining unit for taking a direction at a predetermined angle with the straight line as a direction in which the finger is located.
- the above preferred solution determines the direction of the finger based on the position of the joint, which has a high accuracy.
- the vertex determining unit comprises:
- a background removing unit configured to remove a background image from the first image
- a contour identifying unit configured to identify the contour of the joints connecting each finger to the back of the hand in the first image after the background image has been removed;
- a vertex recognition unit for identifying a vertex of the joint according to the curvature of the contour.
- the background removal unit comprises:
- a color space conversion unit configured to perform color space conversion processing on the first image
- a binarization processing unit configured to perform binarization processing on the first image subjected to color space conversion processing
- a background processing unit configured to remove the background image in the first image after the binarization process.
- the above preferred solution can further improve the accuracy of the recognition operation.
- the vertex determining unit comprises:
- An image determining unit configured to determine, from the first image and according to the depth value of each pixel and a preset depth range, the image of the joints connecting each finger to the back of the hand as well as the foreground and/or background image;
- An image removing unit configured to remove the foreground and/or background image
- a contour recognition unit configured to identify the contour of the joints connecting each finger to the back of the hand in the first image after the foreground and/or background image has been removed;
- a vertex recognition unit for identifying a vertex of the joint according to the curvature of the contour.
- the above preferred solution removes both the foreground and the background image according to the depth-of-field information, leaving only the near scene containing the back of the hand; the joints can then be identified within that scene, thereby further improving the recognition efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
A method and an apparatus for determining an area of a finger in an image, and a wrist-type device. The method comprises: obtaining a first image and a second image (S1), wherein the first image is an image taken in a direction towards the back of a hand along a wrist, and the second image is an image taken in a direction towards the palm of the hand along the wrist; determining a direction of the finger according to the first image (S2); and determining an area of the finger in the second image according to the direction of the finger (S3).
Description
Cross reference

This application claims priority to Chinese Patent Application No. 201511031085.0, filed with the Chinese Patent Office on December 31, 2015 and entitled "A Method, Apparatus, and Wrist Device for Determining the Area Where a Finger Is Located in an Image", the entire content of which is incorporated herein by reference.
The present invention relates to the field of smart wearable devices, and in particular to a method, an apparatus and a wrist device for determining the area where a finger is located in an image.

With the rapid development of related software and hardware technologies, wrist smart devices such as smart watches and smart bracelets are becoming increasingly integrated and feature-rich; a large proportion of mobile phone functions can now be realized through smart watches and smart bracelets, which greatly simplifies the way users receive and deliver information. Compared with a conventional smartphone, however, a wrist smart device is limited by its small display screen. On the one hand, the user cannot conveniently operate the related functions through the touch screen or buttons, which easily causes misoperation. On the other hand, when a smart watch is worn on one hand, all but the simplest operations (such as wake-up and sleep) must be completed with the other hand; the smart watch cannot be operated independently with one hand. The smart watch therefore still has significant shortcomings in content display and operation.

At present, some wrist devices capture the motion of the user's fingers through a camera disposed on the device and control the device according to that motion. The camera of such a device is usually disposed on the inner side of the wrist, and the finger image is taken toward the user's palm. However, because the environment in which the user is located is complex, identifying specific areas such as fingers or joints in that image is susceptible to interference from ambient light and the shooting background, and thus the accuracy of finding the area where the finger is located in the image is low.
Summary of the invention

Therefore, the technical problem to be solved by the present invention is that in the prior art the accuracy of finding the area where the finger is located in a hand image is low.

In view of this, the present invention provides a method for determining the area where a finger is located in an image, comprising the steps of: acquiring a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand and the second image is an image taken along the wrist toward the palm of the hand; determining a direction in which the finger is located according to the first image; and determining an area where the finger is located in the second image according to the direction in which the finger is located.

Preferably, the determining, according to the first image, of a direction in which the finger is located comprises: identifying, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; fitting a straight line using the apexes of the joints; and taking a direction at a predetermined angle to the straight line as the direction in which the finger is located.

Preferably, the identifying, in the first image, of the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand comprises: removing the foreground and/or background image from the first image; identifying, in the first image after the foreground and/or background image has been removed, the contour of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and identifying the vertices of the joints according to the curvature of the contour.

Preferably, the removing of the foreground and/or background image from the first image comprises: performing color space conversion processing on the first image; performing binarization processing on the color-space-converted first image; and removing the foreground and/or background image from the binarized first image.

Preferably, the removing of the foreground and/or background image from the first image comprises: acquiring a depth value of each pixel in the first image; comparing the depth value of each pixel with a preset depth range to determine the finger image and the foreground and/or background image in the first image; and removing the foreground and/or background image.
Accordingly, the present invention also provides an apparatus for determining the area where a finger is located in an image, comprising:

an acquiring unit, configured to acquire a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand, and the second image is an image taken along the wrist toward the palm; a direction determining unit, configured to determine, according to the first image, a direction in which the finger is located; and an area determining unit, configured to determine, according to the direction in which the finger is located, an area where the finger is located in the second image.

Preferably, the direction determining unit comprises: a vertex determining unit, configured to identify, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; a fitting unit, configured to fit a straight line using the vertices of the joints; and an angular direction determining unit, configured to take a direction at a predetermined angle to the straight line as the direction in which the finger is located.

Preferably, the vertex determining unit comprises: a background removing unit, configured to remove the foreground and/or background image from the first image; a contour identifying unit, configured to identify, in the first image after the foreground and/or background image has been removed, the contour of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and a vertex recognition unit, configured to recognize the vertices of the joints according to the curvature of the contour.

Preferably, the background removing unit includes: a color space converting unit, configured to perform color space conversion processing on the first image; a binarization processing unit, configured to perform binarization processing on the color-space-converted first image; and a background processing unit, configured to remove the foreground and/or background image from the binarized first image.

Preferably, the background removing unit includes: a depth value acquiring unit, configured to acquire a depth value of each pixel in the first image; an image determining unit, configured to compare the depth value of each pixel with a preset depth range to determine the finger image and the foreground and/or background image in the first image; and an image removal unit, configured to remove the foreground and/or background image.
The present invention also provides a wrist device, comprising: a wristband; a first camera device disposed on the wristband; a second camera device disposed on the wristband opposite the first camera device, the lens of the second camera device pointing in the same direction as that of the first camera device; and a processor configured to process the hand images collected by the first camera device and the second camera device.

Preferably, the first camera device is configured to capture a first image along the wrist toward the back of the hand, the second camera device is configured to capture a second image along the wrist toward the palm of the hand, and the processor determines, using the above method, the area where the finger is located in the second image.

Preferably, the wrist device is a smart watch, the processor is disposed in the dial, the first camera device and the second camera device are respectively disposed at the dial and on the strap, and the camera device disposed on the strap is coupled to the processor by a connecting member disposed within the strap.

Preferably, the connecting member is a flexible circuit board.
The present invention also provides another wrist device, comprising: a wristband; a first camera device disposed on the wristband; a second camera device disposed on the wristband opposite the first camera device, the lens of the second camera device pointing in the same direction as that of the first camera device; a first processor configured to process the hand image collected by the first camera device; and a second processor configured to process the hand image collected by the second camera device according to the processing result of the first processor.

Preferably, the first camera device is configured to capture a first image along the wrist toward the back of the hand; the second camera device is configured to capture a second image along the wrist toward the palm; the first processor is configured to determine, according to the first image, a direction in which the finger is located; and the second processor is configured to determine, according to the direction in which the finger is located, an area where the finger is located in the second image.

Preferably, the wrist device is a smart watch, the first processor and the second processor are respectively disposed at the dial and on the watchband, the first camera device and the second camera device are respectively disposed at the dial and on the watchband, and the processor disposed on the strap and the processor disposed in the dial are connected by a connecting member disposed in the strap.

Preferably, the connecting member is a flexible circuit board.
With the method and apparatus for determining the area where a finger is located in an image provided by the present invention, the direction in which the finger is located can be obtained by analyzing the image taken along the wrist toward the back of the hand; according to that direction, the area where the finger is located can then be determined in the image taken along the wrist toward the palm, the activity of the finger can be judged within that area, and finally various control operations can be realized according to the activity of specific parts of the finger. When this scheme is used to recognize the activity of a specific part of the finger, there is no need to search the entire image; it suffices to search within the determined area. The scheme therefore improves the efficiency of finding the area where the finger is located in the image, and thus the efficiency of finger activity recognition.

The wrist device provided by the present invention can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the processor can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.
In order to more clearly illustrate the specific embodiments of the present invention or the technical solutions in the prior art, the drawings required for the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a wrist device according to Embodiment 1 of the present invention;

FIG. 2 is a schematic diagram of a hand image collected by one camera device of a wrist device according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a hand image collected by the other camera device of a wrist device according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a wrist device according to Embodiment 2 of the present invention;

FIG. 5 is a flowchart of a method for determining the area where a finger is located in an image according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of the finger direction determined by the method for determining the area where a finger is located in an image according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of the finger area determined by the method for determining the area where a finger is located in an image according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of a specific scene in which the direction of the finger is determined;

FIG. 9 is a schematic structural diagram of an apparatus for determining the area where a finger is located in an image according to an embodiment of the present invention.
The technical solutions of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

In the description of the present invention, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present invention. Moreover, the terms "first", "second" and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.

In the description of the present invention, it should also be noted that, unless otherwise explicitly specified and defined, the terms "mounted", "connected" and "coupled" are to be understood broadly: the connection may be a fixed, detachable or integral connection; a mechanical or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or internal communication between two components; and it may be wireless or wired. The specific meaning of these terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.

In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with one another.
Embodiment 1

An embodiment of the present invention provides a wrist device. As shown in FIG. 1, the device includes a wristband 10, a first camera device 11 and a second camera device 12; the first camera device 11 and the second camera device 12 are both provided on the wristband 10, are disposed opposite each other, and point in the same direction.

After the user puts the device on, the first camera device 11 can capture a hand image along the wearer's wrist toward the back of the hand, and the second camera device 12 can capture a hand image along the wearer's wrist toward the palm. For different types of wrist devices the camera devices are arranged differently, but their angle relative to the wearer's arm is relatively fixed. The first camera device 11 and the second camera device 12 arranged in this way can respectively capture hand images as shown in FIGS. 2 and 3.

The processor 13 is connected to the first camera device 11 and the second camera device 12 and is configured to process the hand images collected by them. The processor 13 can perform various kinds of processing on the images, such as recognizing a hand motion in an image or controlling the device according to the hand motion reflected by the image; this will be described in detail in the following embodiments.

The above device can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the processor 13 can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.

The above wrist device may be a smart watch. When the user wears the watch, the dial is usually located on the outside of the wrist and the strap surrounds the wrist; therefore, the first camera device 11 can be disposed at the dial and the second camera device 12 can be disposed on the strap. Disposed in this way and facing the hand, the first camera device 11 has exactly the angle and direction needed to capture the back of the hand, while the second camera device 12, disposed on the strap (for example, near or on the strap buckle) and facing the hand, has exactly the angle and direction needed to capture the palm and fingers. This structure does not require the user to adjust the position of the cameras, making the device easier to wear. It is also possible to swap the positions of the two cameras. As the processing core of the smart watch, the processor 13 can be disposed at the dial; the connecting component between the second camera device 12 and the processor 13 is disposed in the watchband and may be a flexible circuit board.
Embodiment 2

An embodiment of the present invention provides a wrist device. As shown in FIG. 4, the device includes a wristband 20, a first camera device 21 and a second camera device 22; the first camera device 21 and the second camera device 22 are both provided on the wristband 20, are disposed opposite each other, and point in the same direction.

The first camera device 21 and the second camera device 22 arranged in this way can respectively capture hand images as shown in FIGS. 2 and 3.

The first processor 23 is connected to the first camera device 21 and is configured to process the hand image collected by the first camera device 21;

the second processor 24 is connected to the second camera device 22 and is configured to process the hand image collected by the second camera device 22 according to the processing result of the first processor 23.

The first processor 23 and the second processor 24 can each perform various kinds of processing on the images they receive, for example identifying a hand motion in the image or controlling the device according to the hand motion reflected by the image; this will be described in detail in the subsequent embodiments.

The above device can use its two camera devices to collect images of the wearer's hand along the wrist toward the palm and toward the back of the hand respectively; the captured images show the user's hand, and the two processors can then analyze and evaluate the hand images, recognize the user's gestures from them, and thereby control the device.

The above wrist device may be a smart watch, with the first camera device 21 and the second camera device 22 respectively disposed at the dial and on the watchband and the corresponding processors disposed nearby; for example, the first camera device 21 and the first processor 23 may be disposed at the dial, while the second camera device 22 and the second processor 24 are disposed on the strap near the strap buckle. The processor disposed on the strap and the processor disposed in the dial are connected by a connecting member disposed in the strap, and the connecting member is preferably a flexible circuit board. In the example shown in FIG. 4 the first processor 23 and the second processor 24 are provided separately, but the invention is not limited thereto and the two processors may also be provided together.
Embodiment 3
An embodiment of the present invention provides a method for determining the area in which a finger is located in an image. The method may be performed by the processor 13 in Embodiment 1, or by the first processor 23 and the second processor 24 in Embodiment 2. As shown in FIG. 5, the method includes the following steps:
S1: Acquire a first image and a second image, where the first image is, for example, the image shown in FIG. 2 and the second image is, for example, the image shown in FIG. 3.
S2: Determine the direction in which the fingers lie according to the first image. It should be noted that in the normal state the wearer's palm is naturally bent inward (a half-fist shape), so the first camera 11 usually does not capture the fingers; what is estimated from the image shown in FIG. 2 in this step is only direction information.
Specifically, the image shown in FIG. 2 contains the back of the wearer's hand. By analyzing and processing this image, feature information such as the outline of the back of the hand, the tilt angle of the image content and the tilt state of the hand joints can be obtained, and the orientation of the wearer's fingers can then be estimated from this feature information. Those skilled in the art will understand that the direction of the fingers is not necessarily perpendicular to the lower edge of the image; because of factors such as how the device is worn and the state of the hand joints, the direction usually forms an angle with the lower edge of the image, as indicated by the arrow in FIG. 6.
S3: Determine the area in which the finger lies in the second image according to the direction of the finger, where the area of the finger is the area containing the fingertip and the finger joints. In the normal state the second camera 12 can capture an image containing the fingers, so the position of the fingertip can be searched for in the image shown in FIG. 3 along the direction estimated in step S2. Specifically, the area can be determined in the image from a preset length together with this direction, and by setting preset distances, more specific areas such as the fingertip and the joints can be determined. Taking the fingertip area as an example, the method above yields an area in which the fingertip may be present, as shown in FIG. 7; alternatively, the image may be divided into several segments according to preset proportion values and the direction information, and one or more of these segments are then selected.
Areas other than the area in which the finger lies can be ignored directly or given lower weights. When recognizing finger motions, the specific part of the finger can be searched for preferentially within the determined area, which reduces the amount of computation required by the recognition operation.
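As an illustration only (not part of the original disclosure), the following Python sketch shows one way a candidate fingertip region could be derived from an estimated finger direction and a preset length; the anchor point at the lower edge, the preset length, the window size and the function name candidate_fingertip_region are assumptions introduced for the example.

```python
import numpy as np

def candidate_fingertip_region(img_shape, direction_deg, preset_len=120, half_size=40):
    """Return (x0, y0, x1, y1) of a square window that may contain the fingertip.

    img_shape:     (height, width) of the second (palm-side) image
    direction_deg: finger direction estimated from the first image, measured
                   counter-clockwise from the lower edge of the image (assumption)
    preset_len:    assumed distance in pixels from the anchor to the fingertip
    half_size:     assumed half-width of the search window
    """
    h, w = img_shape
    # Assumed anchor: middle of the lower edge, roughly where the wrist enters the frame.
    anchor = np.array([w / 2.0, h - 1.0])
    theta = np.deg2rad(direction_deg)
    # Step from the anchor along the estimated direction (image y grows downward,
    # so moving "up" the image means subtracting from y).
    center = anchor + preset_len * np.array([np.cos(theta), -np.sin(theta)])
    x0 = int(np.clip(center[0] - half_size, 0, w - 1))
    y0 = int(np.clip(center[1] - half_size, 0, h - 1))
    x1 = int(np.clip(center[0] + half_size, 0, w - 1))
    y1 = int(np.clip(center[1] + half_size, 0, h - 1))
    return x0, y0, x1, y1

# Example: a 480x640 palm-side image and a direction of 75 degrees.
print(candidate_fingertip_region((480, 640), 75))
```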
According to the above scheme, the direction of the fingers is obtained by analyzing the image taken along the wrist toward the back of the hand; using this direction, the area in which the fingers lie can be determined in the image taken along the wrist toward the palm, the finger activity can be judged within that area, and various control operations are finally performed according to the activity of specific parts of the finger. With this scheme, recognizing the activity of a specific part of the finger does not require searching the whole image for that part; it only has to be searched for within the determined area. The scheme therefore improves the efficiency of finding the area of the finger in the image, and thus the efficiency of the finger-activity recognition operation.
As a preferred implementation, step S2 may include:
S21: Identify, in the first image, the apexes of the joints connecting at least two of the index finger, middle finger, ring finger and little finger to the back of the hand. The identification can be performed according to the hand shape features in the image. There are many ways to find parts with distinct features in an image, and existing feature recognition algorithms can be used. In this way the apexes of the four joints of the index finger, middle finger, ring finger and little finger shown in FIG. 8 can be identified. This embodiment preferably uses all four joint apexes for the subsequent processing, which gives the most accurate result, but the invention is not limited to four apexes; fitting a straight line with two or three apexes is also feasible.
S22: Fit a straight line through the apexes of the joints. There are many algorithms for fitting a straight line through points, and existing algorithms can be used.
S23: Take the direction at a predetermined angle to the straight line as the direction of the fingers. Since the fingers necessarily extend downward from the joint apexes, the downward direction at a certain angle to the straight line can be taken as the direction of the fingers; the preset angle may be 90 degrees. This preferred scheme determines the finger direction from the joint positions and has high accuracy.
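A minimal Python sketch of steps S21–S23, assuming the joint apexes have already been located as pixel coordinates; using cv2.fitLine and a 90-degree offset is one possible realization rather than the implementation mandated by the patent, and finger_direction_from_joints is a hypothetical helper name.

```python
import numpy as np
import cv2

def finger_direction_from_joints(joint_points, offset_deg=90.0):
    """Fit a line through the knuckle apexes and return the finger direction in degrees.

    joint_points: list of (x, y) apex coordinates (2 to 4 points)
    offset_deg:   predetermined angle between the fitted line and the fingers
    """
    pts = np.asarray(joint_points, dtype=np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    line_angle = np.degrees(np.arctan2(vy, vx))
    # Rotate by the predetermined angle and pick the downward-pointing candidate
    # (image y grows downward), since the fingers extend below the knuckles.
    for cand in (line_angle + offset_deg, line_angle - offset_deg):
        if np.sin(np.radians(cand)) > 0:
            return cand % 360.0
    return (line_angle + offset_deg) % 360.0

# Example with four hypothetical knuckle apexes:
print(finger_direction_from_joints([(120, 200), (180, 190), (240, 195), (300, 210)]))
```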
As noted above, there are many ways to find parts with distinct features in an image. As a preferred implementation, step S21 may include:
S211: Remove the foreground and/or background image from the first image. There are several methods of determining the foreground and background in an image. Since human skin color falls within a certain numerical range, the RGB values of the pixels in the hand region should all lie within a certain range, so the content of the image can be classified against a preset RGB range to separate the target image from the background image. The content can also be classified and removed according to the sharpness or depth values of the image; any existing background removal method is feasible.
S212: Identify, in the first image after the foreground and/or background has been removed, the contours of the joints connecting at least two of the index finger, middle finger, ring finger and little finger to the back of the hand. After the background has been removed, only the skin region remains in the hand image, and this region can be regarded as the hand. To identify the finger parts, the judgment must be based on the morphological features of the fingers. The Canny operator can therefore be used to extract the edge contour of the hand region; the Canny operator measures the product of the signal-to-noise ratio and the localization, approximates it by an optimization method, and obtains the edge information.
Specifically, the image is first smoothed with a Gaussian filter to suppress image noise and reduce its influence on the detection result. The Gaussian kernel function is as follows:
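The kernel itself is not reproduced in this text version of the publication; the standard two-dimensional Gaussian presumably intended is

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),

where σ controls the amount of smoothing.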
Then the gradient of the image gray values is computed, that is, first-order differences are taken in the two directions, and the gradient magnitude and direction of every pixel are calculated.
f′_x(x, y) ≈ G_x = [f(x+1, y) − f(x, y) + f(x+1, y+1) − f(x, y+1)] / 2
f′_y(x, y) ≈ G_y = [f(x, y+1) − f(x, y) + f(x+1, y+1) − f(x+1, y)] / 2
The corresponding magnitude and direction are:
θ[x, y] = arctan(G_x(x, y) / G_y(x, y))
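The magnitude expression does not survive in this text version of the publication; in the usual formulation it is

M[x, y] = √(G_x(x, y)² + G_y(x, y)²),

the gradient magnitude that accompanies the direction θ above.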
After the gradient magnitude and direction of every point in the image have been obtained, the local maxima are computed and the corresponding pixels are retained. Finally, the pixels that should be kept are determined according to a double threshold, boundary tracking is performed on the retained pixels, and the edge extraction is completed.
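For illustration, the sequence just described (Gaussian smoothing, gradient computation, non-maximum suppression, double thresholding and edge tracking) corresponds to what OpenCV's built-in Canny implementation performs; a minimal sketch along these lines, with assumed kernel and threshold values, is:

```python
import cv2

def hand_edge_contour(hand_region_gray, low_thresh=50, high_thresh=150):
    """Extract the edge contour of the segmented hand region.

    hand_region_gray: grayscale image in which only the hand/skin region remains
    low_thresh, high_thresh: assumed double-threshold values for Canny
    """
    # Gaussian smoothing to suppress noise before differentiation.
    blurred = cv2.GaussianBlur(hand_region_gray, (5, 5), 1.4)
    # Canny: gradient magnitude/direction, non-maximum suppression,
    # double thresholding and edge tracking by hysteresis.
    edges = cv2.Canny(blurred, low_thresh, high_thresh)
    return edges
```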
S213: Identify the apexes of the joints according to the curvature of the contour. After the edge of the back of the hand has been obtained, the raised joint parts can be extracted using the shape of the back of the hand. Analysis of the shape of a raised joint shows that its edge exhibits an abrupt change in curvature: the lines on both sides of the bump run in roughly the same direction, while the raised line itself bends strongly, turning by close to 180 degrees.
Based on these characteristics, the image edges are first sampled and the edge lines are vectorized to form feature lines with length and direction statistics. During vectorization, distances are obtained from the positions of the pixels and directions are obtained by first-order differences. The dot products of these vectors are then computed to obtain the angles between the vector lines. Next, all of the relatively straight line segments (for example, those whose average included angle is not greater than 25 degrees) are found among the edges. These straight segments are arranged in order, and the change in direction of the curved segments between them is computed; for example, if the change in direction is greater than 140 degrees and the distance is greater than a certain threshold, the segment is judged to be a joint. After removing noise and duplicate results, the raised joint parts are determined.
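A rough Python sketch of the turn-detection idea, assuming the contour has already been sampled into an ordered sequence of points; the 140-degree value mirrors the example in the text, while the window size, the omission of the straight-segment grouping and de-duplication steps, and the helper name knuckle_turn_indices are assumptions made for brevity.

```python
import numpy as np

def knuckle_turn_indices(edge_points, window=8, turn_min_deg=140.0):
    """Flag contour samples around which the edge direction turns by roughly 140 degrees or more.

    edge_points: ordered (x, y) samples along the hand-back contour
    window:      assumed number of samples over which the turn is accumulated
    """
    pts = np.asarray(edge_points, dtype=float)
    vecs = np.diff(pts, axis=0)
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    vecs = vecs / np.maximum(norms, 1e-9)   # unit direction vectors between samples
    hits = []
    for i in range(len(vecs) - window):
        # Angle between the direction entering and leaving the window,
        # obtained from the dot product of the two unit vectors.
        cos_a = np.clip(np.dot(vecs[i], vecs[i + window]), -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) >= turn_min_deg:
            hits.append(i + window // 2)
    return hits
```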
As a preferred implementation, step S211 may include:
S2111a: Perform color space conversion on the first image. Human skin consists of the dermis and the thinner epidermis covering it; light is absorbed by melanin in the epidermis, while in the dermis both absorption and scattering occur. Differences in skin color between individuals are mainly brightness variations caused by different melanin concentrations in the epidermis, whereas the optical properties of the dermis are essentially the same. The skin colors of individuals of the same ethnicity also have strong commonality and are clearly distinct from most background colors, forming a small, tight cluster in color space. Detecting skin on the basis of color is therefore feasible.
Skin color detection requires choosing a suitable color space, one in which skin colors cluster together and overlap as little as possible with non-skin colors. The images captured by the camera are RGB images; in the RGB color space the overlap between skin and non-skin colors is large and is strongly affected by brightness. In the HSV color space, hue, saturation and brightness are well separated, so there is less overlap with non-skin points. In the CbCr subspace of the YCbCr color space, skin colors cluster well within an ellipse-like region, and their distribution over the Cb and Cr components is also concentrated. It is therefore feasible to convert the hand image from the RGB space to either the YCbCr color space or the HSV color space.
The conversion formula from RGB to HSV is:
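The formula itself is not reproduced in this text version of the publication; the standard conversion presumably intended is (with R, G, B normalized to [0, 1] and m = min(R, G, B)):

V = max(R, G, B)
S = (V − m) / V, with S = 0 when V = 0
H = 60 · (G − B) / (V − m) if V = R
H = 120 + 60 · (B − R) / (V − m) if V = G
H = 240 + 60 · (R − G) / (V − m) if V = B
(adding 360 to H when the result is negative)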
The conversion formula from RGB to YCbCr is:
Y = 0.257R′ + 0.504G′ + 0.098B′ + 16
Cb = −0.148R′ − 0.291G′ + 0.439B′ + 128
Cr = 0.439R′ − 0.368G′ − 0.071B′ + 128.
S2112a: Binarize the first image after the color space conversion. After this conversion the image becomes a line image containing only the two colors black and white.
S2113a: Remove the foreground and/or background image from the binarized first image. This preferred scheme can further improve the accuracy of the recognition operation.
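A minimal sketch of the S2111a–S2113a pipeline using OpenCV; the YCrCb thresholds are commonly used heuristic values and, like the morphological clean-up step and the function name skin_mask_ycrcb, are assumptions rather than values prescribed by the patent.

```python
import cv2
import numpy as np

def skin_mask_ycrcb(bgr_image):
    """Segment likely skin pixels: color-space conversion followed by binarization."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Heuristic CrCb skin cluster (assumed values, not taken from the patent).
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)           # binary image: skin = 255
    # Remove small non-hand blobs so that (roughly) only the hand region remains.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

# Example usage: hand_only = cv2.bitwise_and(frame, frame, mask=skin_mask_ycrcb(frame))
```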
As another preferred implementation, step S211 may include:
S2111b: Obtain the depth value of each pixel in the first image.
S2112b: Compare the depth value of each pixel with a preset depth range to determine the finger image and the foreground and/or background image in the first image. Because of the particular geometry of a wrist device, the part to be imaged is roughly 10–15 cm from the camera, so the focus of the camera can be fixed and only needs to be sharp within 10–15 cm. At the same time, when the wrist is raised for operation, other objects in the imaging range (foreground and background) are usually much closer to or farther from the camera than the hand and fall outside the 10–15 cm range, so the foreground and background are out of focus and can easily be distinguished with a blur-degree algorithm.
S212b: Remove the foreground and/or background image, where content too close to the first camera 11 is the foreground image and content too far from the first camera 11 is the background image.
This preferred scheme removes both the foreground and the background images according to the depth information, keeping only the scene in front of the back of the hand; the joints can then be further identified in this scene, which further improves the recognition efficiency.
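For illustration, assuming a per-pixel depth map is available (from a depth sensor or a defocus/blur estimate), the removal described in S2111b–S212b could be sketched as follows; the 100–150 mm window mirrors the 10–15 cm distance mentioned above, and keep_hand_by_depth is a hypothetical helper name.

```python
import numpy as np

def keep_hand_by_depth(image, depth_mm, near_mm=100, far_mm=150):
    """Zero out pixels whose depth lies outside the assumed 10-15 cm working range.

    image:    H x W x C color image
    depth_mm: H x W array of per-pixel depth values in millimetres
    """
    in_range = (depth_mm >= near_mm) & (depth_mm <= far_mm)
    out = image.copy()
    out[~in_range] = 0          # too close (foreground) or too far (background)
    return out
```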
Embodiment 4
An embodiment of the present invention provides an apparatus for determining the area in which a finger is located in an image. As shown in FIG. 9, the apparatus includes:
an acquiring unit 91, configured to acquire a first image and a second image, where the first image is an image taken along the wrist toward the back of the hand and the second image is an image taken along the wrist toward the palm;
a direction determining unit 92, configured to determine the direction in which the finger lies according to the first image; and
an area determining unit 93, configured to determine the area in which the finger lies in the second image according to the direction of the finger.
According to the above scheme, the direction of the fingers is obtained by analyzing the image taken along the wrist toward the back of the hand; using this direction, the area in which the fingers lie can be determined in the image taken along the wrist toward the palm, the activity of specific parts of the finger can be judged within that area, and various control operations are finally performed according to that activity. With this scheme, recognizing the activity of a specific part of the finger does not require searching the whole image; it only has to be searched for within the determined area, which improves the efficiency of finding the area of the finger in the image and thus the efficiency of the finger-activity recognition operation.
Preferably, the direction determining unit 92 includes:
an apex determining unit, configured to identify, in the first image, the apexes of the joints connecting at least two of the index finger, middle finger, ring finger and little finger to the back of the hand;
a fitting unit, configured to fit a straight line through the apexes of the joints; and
an angle direction determining unit, configured to take the direction at a predetermined angle to the straight line as the direction of the finger.
This preferred scheme determines the finger direction from the joint positions and has high accuracy.
Preferably, the apex determining unit includes:
a background removal unit, configured to remove the background image from the first image;
a contour identification unit, configured to identify the contours of the joints connecting the fingers to the back of the hand in the first image after the background image has been removed; and
an apex identification unit, configured to identify the apexes of the joints according to the curvature of the contours.
Preferably, the background removal unit includes:
a color space conversion unit, configured to perform color space conversion on the first image;
a binarization processing unit, configured to binarize the first image after the color space conversion; and
a background processing unit, configured to remove the background image from the binarized first image.
This preferred scheme can further improve the accuracy of the recognition operation.
Preferably, the apex determining unit includes:
an image determining unit, configured to determine, from the first image, the images of the joints connecting the fingers to the back of the hand as well as the foreground and/or background image, according to the depth value of each pixel in the first image and a preset depth range value;
an image removal unit, configured to remove the foreground and/or background image;
a contour identification unit, configured to identify the contours of the joints connecting the fingers to the back of the hand in the first image after the foreground and/or background image has been removed; and
an apex identification unit, configured to identify the apexes of the joints according to the curvature of the contours.
This preferred scheme removes both the foreground and the background images according to the depth information, keeping only the scene in front of the back of the hand; the joints can then be further identified in this scene, which further improves the recognition efficiency.
It is apparent that the above embodiments are merely examples given for clarity of description and are not limitations on the implementations. Those of ordinary skill in the art can make other changes or modifications in various forms on the basis of the above description. It is neither necessary nor possible to exhaust all implementations here; obvious changes or modifications derived therefrom are still within the protection scope of the invention.
Claims (18)
- 1. A method for determining an area in which a finger is located in an image, comprising the following steps: acquiring a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand and the second image is an image taken along the wrist toward the palm; determining a direction in which the finger lies according to the first image; and determining an area in which the finger lies in the second image according to the direction in which the finger lies.
- 2. The method according to claim 1, wherein determining the direction in which the finger lies according to the first image comprises: identifying, in the first image, apexes of joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; fitting a straight line through the apexes of the joints; and taking a direction at a predetermined angle to the straight line as the direction in which the finger lies.
- 3. The method according to claim 2, wherein identifying, in the first image, the apexes of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand comprises: removing a foreground and/or background image from the first image; identifying, in the first image after the foreground and/or background image has been removed, contours of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and identifying the apexes of the joints according to the curvature of the contours.
- 4. The method according to claim 3, wherein removing the foreground and/or background image from the first image comprises: performing color space conversion on the first image; binarizing the first image after the color space conversion; and removing the foreground and/or background image from the binarized first image.
- 5. The method according to claim 3, wherein removing the foreground and/or background image from the first image comprises: obtaining a depth value of each pixel in the first image; comparing the depth value of each pixel with a preset depth range value to determine a finger image and the foreground and/or background image in the first image; and removing the foreground and/or background image.
- 6. An apparatus for determining an area in which a finger is located in an image, comprising: an acquiring unit, configured to acquire a first image and a second image, wherein the first image is an image taken along the wrist toward the back of the hand and the second image is an image taken along the wrist toward the palm; a direction determining unit, configured to determine a direction in which the finger lies according to the first image; and an area determining unit, configured to determine an area in which the finger lies in the second image according to the direction in which the finger lies.
- 7. The apparatus according to claim 6, wherein the direction determining unit comprises: an apex determining unit, configured to identify, in the first image, apexes of joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; a fitting unit, configured to fit a straight line through the apexes of the joints; and an angle direction determining unit, configured to take a direction at a predetermined angle to the straight line as the direction in which the finger lies.
- 8. The apparatus according to claim 7, wherein the apex determining unit comprises: a background removal unit, configured to remove a foreground and/or background image from the first image; a contour identification unit, configured to identify, in the first image after the foreground and/or background image has been removed, contours of the joints connecting at least two of the index finger, the middle finger, the ring finger and the little finger to the back of the hand; and an apex identification unit, configured to identify the apexes of the joints according to the curvature of the contours.
- 9. The apparatus according to claim 8, wherein the background removal unit comprises: a color space conversion unit, configured to perform color space conversion on the first image; a binarization processing unit, configured to binarize the first image after the color space conversion; and a background processing unit, configured to remove the foreground and/or background image from the binarized first image.
- 10. The apparatus according to claim 8, wherein the background removal unit comprises: a depth value obtaining unit, configured to obtain a depth value of each pixel in the first image; an image determining unit, configured to compare the depth value of each pixel with a preset depth range value to determine a finger image and the foreground and/or background image in the first image; and an image removal unit, configured to remove the foreground and/or background image.
- 11. A wrist device, comprising: a wristband; a first camera, disposed on the wristband; a second camera, disposed on the wristband opposite the first camera, wherein the lens of the second camera points in the same direction as that of the first camera; and a processor, configured to process hand images captured by the first camera and the second camera.
- 12. The wrist device according to claim 11, wherein the first camera is configured to take a first image along the wrist toward the back of the hand, the second camera is configured to take a second image along the wrist toward the palm, and the processor determines the area in which a finger lies in the second image using the method according to any one of claims 1 to 5.
- 13. The wrist device according to claim 11 or 12, wherein the wrist device is a smart watch, the processor is disposed inside the dial, the first camera and the second camera are respectively disposed at the dial and on the watchband, and the camera disposed on the watchband is connected to the processor through a connecting component disposed inside the watchband.
- 14. The wrist device according to claim 13, wherein the connecting component is a flexible circuit board.
- 15. A wrist device, comprising: a wristband; a first camera, disposed on the wristband; a second camera, disposed on the wristband opposite the first camera, wherein the lens of the second camera points in the same direction as that of the first camera; a first processor, configured to process hand images captured by the first camera; and a second processor, configured to process hand images captured by the second camera according to a processing result of the first processor.
- 16. The wrist device according to claim 15, wherein the first camera is configured to take a first image along the wrist toward the back of the hand; the second camera is configured to take a second image along the wrist toward the palm; the first processor is configured to determine a direction in which a finger lies according to the first image; and the second processor is configured to determine an area in which the finger lies in the second image according to the direction in which the finger lies.
- 17. The wrist device according to claim 15 or 16, wherein the wrist device is a smart watch, the first processor and the second processor are respectively disposed at the dial and on the watchband, the first camera and the second camera are respectively disposed at the dial and on the watchband, and the processor disposed on the watchband is connected to the processor disposed inside the dial through a connecting component disposed inside the watchband.
- 18. The wrist device according to claim 17, wherein the connecting component is a flexible circuit board.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511031085.0 | 2015-12-31 | ||
CN201511031085.0A CN106933341B (en) | 2015-12-31 | 2015-12-31 | Method and device for determining region of finger in image and wrist type equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017113793A1 true WO2017113793A1 (en) | 2017-07-06 |
Family
ID=59224451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/093225 WO2017113793A1 (en) | 2015-12-31 | 2016-08-04 | Method and apparatus for determining area of finger in image, and a wrist-type device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106933341B (en) |
WO (1) | WO2017113793A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163045B (en) * | 2018-06-07 | 2024-08-09 | 腾讯科技(深圳)有限公司 | Gesture recognition method, device and equipment |
CN111666792B (en) * | 2019-03-07 | 2023-04-28 | 阿里巴巴集团控股有限公司 | Image recognition method, image acquisition and recognition method, and commodity recognition method |
CN111930004A (en) * | 2020-09-09 | 2020-11-13 | 深圳五洲无线股份有限公司 | Behavior monitoring system |
CN112839172B (en) * | 2020-12-31 | 2022-02-18 | 深圳瞬玩科技有限公司 | Shooting subject identification method and system based on hand identification |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005301583A (en) * | 2004-04-09 | 2005-10-27 | Nara Institute Of Science & Technology | Typing input device |
DE102005011432A1 (en) * | 2005-03-12 | 2006-09-14 | Volkswagen Ag | Data glove for virtual-reality-system, has reference body comprising hand reference points identified as reference points by infrared camera, and retaining unit retaining person`s finger tip and comprising finger reference points |
JP2010271978A (en) * | 2009-05-22 | 2010-12-02 | Nippon Telegr & Teleph Corp <Ntt> | Behavior estimating device |
CN202584010U (en) * | 2012-04-06 | 2012-12-05 | 寇传阳 | Wrist-mounting gesture control system |
US20120319940A1 (en) * | 2011-06-16 | 2012-12-20 | Daniel Bress | Wearable Digital Input Device for Multipoint Free Space Data Collection and Analysis |
CN102915111A (en) * | 2012-04-06 | 2013-02-06 | 寇传阳 | Wrist gesture control system and method |
CN104063059A (en) * | 2014-07-13 | 2014-09-24 | 华东理工大学 | Real-time gesture recognition method based on finger division |
CN105027030A (en) * | 2012-11-01 | 2015-11-04 | 艾卡姆有限公司 | Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing |
CN105184268A (en) * | 2015-09-15 | 2015-12-23 | 北京国承万通信息科技有限公司 | Gesture recognition device, gesture recognition method, and virtual reality system |
CN205485915U (en) * | 2015-12-31 | 2016-08-17 | 北京体基科技有限公司 | Wrist formula equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106933341B (en) | 2024-04-26 |
CN106933341A (en) | 2017-07-07 |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16880588; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.10.18)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 16880588; Country of ref document: EP; Kind code of ref document: A1