WO2017161778A1 - Palm center positioning and gesture recognition methods, devices, and smart terminal - Google Patents


Publication number
WO2017161778A1
WO2017161778A1 · PCT/CN2016/089380
Authority
WO
WIPO (PCT)
Prior art keywords
palm
outer contour
position information
image
acquiring
Prior art date
Application number
PCT/CN2016/089380
Other languages
English (en)
French (fr)
Inventor
李艳杰
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US 15/245,159, published as US20170277944A1
Published as WO2017161778A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 Static hand or arm
    • G06V 40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering

Definitions

  • The invention relates to the technical field of computer vision and image processing, and in particular to a palm-center positioning method, a gesture recognition method, corresponding devices, and a smart terminal.
  • Gesture recognition is an important research direction in the field of human-computer interaction and plays an important role in building intelligent human-computer interfaces.
  • In one prior-art approach, a circle is constructed whose center is the midpoint between two convexity-defect depth points, and it is checked whether the circle contains all convexity-defect depth points. If it does, the circle is taken as the palm's inscribed circle, which locates the palm. If it does not, a convexity-defect depth point outside the circle is additionally selected, and it is determined whether the triangle formed by the three depth points is right-angled or obtuse. If so, a new circle is constructed, by the same method, from the two depth points opposite the right or obtuse angle, and the containment check is repeated; this continues until a circle containing all convexity-defect depth points is found and taken as the palm's inscribed circle.
  • If the triangle formed by the three depth points is acute, the procedure is more complicated: the circumcircle of the acute triangle is constructed, and it is then determined whether that circle contains all convexity-defect depth points. If it does, the circle is taken as the palm's inscribed circle to locate the palm; if not, convexity-defect depth points are re-selected and the above operations are repeated.
  • The technical problem to be solved by the present invention is to overcome the complicated steps and low accuracy of prior-art palm positioning methods, by providing palm-center positioning and gesture recognition methods, devices, and smart terminals that are simple to operate and highly accurate.
  • The invention provides a method for locating the palm center of a palm, comprising the following steps:
  • Position information of the palm center is acquired according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • The step of acquiring the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour further includes:
  • Acquiring the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour includes:
  • The invention also provides a palm-center positioning device, comprising:
  • an image acquisition unit for acquiring an image including a palm and an arm;
  • a connected-region acquiring unit configured to acquire the connected region of the palm and the arm in the image based on skin-color features in the image;
  • an outer-contour acquiring unit configured to acquire the outer contour of the connected region;
  • a position-information acquiring unit configured to acquire the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • The above device further includes:
  • a filling unit configured to fill a hole when a hole exists in the connected region within the outer contour.
  • The invention also provides a gesture recognition method, comprising the following steps:
  • Gesture recognition is performed according to the change in the palm-center position information acquired within a predetermined time and/or the change in the area of the palm's inscribed circle.
  • The invention also provides a gesture recognition device, comprising:
  • a position-information acquiring unit configured to acquire the palm-center position information by using the above palm-center positioning method;
  • a palm inscribed-circle determining unit configured to determine the palm's inscribed circle with the palm center as the center and the maximum value as the radius;
  • a gesture recognition unit configured to perform gesture recognition according to the change in the palm-center position information acquired within the predetermined time and/or the change in the area of the palm's inscribed circle.
  • The present invention also provides a smart terminal comprising the above palm-center positioning device and/or the above gesture recognition device.
  • The invention also provides a smart terminal comprising an image acquisition device and the above palm-center positioning device;
  • the image acquisition device is configured to capture an image including a palm and an arm.
  • The invention also provides a smart terminal comprising an image acquisition device and the above gesture recognition device;
  • the image acquisition device is configured to capture an image including a palm and an arm.
  • The invention provides a palm-center positioning method and device that first acquire an image including a palm and an arm; then acquire the connected region of the palm and the arm in the image based on skin-color features; then acquire the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtain the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • The steps are simple and easy to perform, and the positioning accuracy is high.
  • The invention also provides a gesture recognition method and device, which use the above palm-center positioning method to obtain the palm-center position information; determine the palm's inscribed circle with the palm center as the center and the maximum value as the radius; and perform gesture recognition according to the change in the palm-center position and/or the change in the area of the inscribed circle. Because the palm-center positioning steps are simple, easy to perform, and highly accurate, the steps of gesture recognition are likewise simplified and its efficiency improved.
  • The present invention provides a palm-center positioning device, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: acquiring an image including a palm and an arm; acquiring the connected region of the palm and the arm in the image based on skin-color features; acquiring the outer contour of the connected region; and acquiring the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • The step of acquiring the palm-center position information according to the maximum of the shortest distances further comprises: when a hole exists in the connected region within the outer contour, filling the hole.
  • The present invention further provides a gesture recognition apparatus, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: acquiring the palm-center position information by the above palm-center positioning method; determining the palm's inscribed circle with the palm center as the center and the maximum value as the radius; and performing gesture recognition according to the change in the palm-center position information acquired within the predetermined time and/or the change in the area of the palm's inscribed circle.
  • FIG. 1 is a flow chart showing a specific example of a method for positioning a palm position of a palm according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of a specific example of an image including a palm and an arm acquired in a palm position positioning method of a palm according to Embodiment 1 of the present invention
  • FIG. 3 is a schematic diagram of a specific example of the connected region of a palm and an arm in a palm-center positioning method according to Embodiment 1 of the present invention;
  • FIG. 4 is a flowchart of a specific example of acquiring position information of a palm in a palm position positioning method of a palm according to Embodiment 1 of the present invention
  • FIG. 5 is a flow chart of a preferred specific example of a method for positioning a palm position of a palm according to Embodiment 1 of the present invention
  • FIG. 6 is a schematic diagram of a specific example of a connected region containing a hole in a palm-center positioning method according to Embodiment 1 of the present invention;
  • FIG. 7 is a schematic diagram showing the effect of filling a hole in the connected region in a palm-center positioning method according to Embodiment 1 of the present invention;
  • FIG. 8 is a structural block diagram of a specific example of a palm-center positioning device according to Embodiment 2 of the present invention;
  • FIG. 9 is a flowchart of a specific example of a gesture recognition method according to Embodiment 3 of the present invention.
  • FIG. 10 is a structural block diagram of a specific example of a gesture recognition apparatus according to Embodiment 4 of the present invention.
  • FIG. 11 is a schematic structural diagram of a palm position locating device having a processor according to Embodiment 8 of the present invention.
  • FIG. 12 is a schematic structural diagram of a palm position locating device having two processors according to Embodiment 8 of the present invention.
  • FIG. 13 is a schematic structural diagram of a gesture recognition apparatus having a processor according to Embodiment 9 of the present invention.
  • FIG. 14 is a schematic structural diagram of a gesture recognition apparatus with two processors according to Embodiment 9 of the present invention.
  • The term "connection" should be understood broadly: it may be an integral connection, a mechanical connection, or an electrical connection; it may be direct, indirect through an intermediate medium, or internal communication between two components; and it may be wireless or wired.
  • This embodiment provides a palm-center positioning method, as shown in FIG. 1, comprising the following steps:
  • S1: acquire an image including the palm and the arm. Specifically, when the user makes a gesture within the shooting range of a device with an image-capture or photographing function, such as a camera, an image including the user's palm and arm can be captured and transmitted to a storage device; the image including the palm and the arm can then be read from that storage device. FIG. 2 shows such an image of the palm and the arm.
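  • The patent does not fix a particular skin-color model for obtaining the connected region; as a minimal illustrative sketch, skin pixels can be thresholded in the YCrCb color space. The conversion below is the standard BT.601 formula, but the Cr/Cb ranges are common heuristics assumed here, not values taken from the patent:

```python
def rgb_to_ycrcb(r, g, b):
    """Convert an RGB pixel (0-255) to Y, Cr, Cb using the ITU-R BT.601 formulas."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def skin_mask(image):
    """Binary mask of likely skin pixels.

    `image` is a 2-D list of (r, g, b) tuples.  The Cr/Cb ranges below are
    widely used heuristics, not values specified by the patent.
    """
    mask = []
    for row in image:
        mask_row = []
        for (r, g, b) in row:
            _, cr, cb = rgb_to_ycrcb(r, g, b)
            mask_row.append(1 if 133 <= cr <= 173 and 77 <= cb <= 127 else 0)
        mask.append(mask_row)
    return mask

# A skin-toned pixel and a blue pixel:
demo = [[(224, 172, 105), (0, 0, 255)]]
print(skin_mask(demo))  # → [[1, 0]]
```

The connected region of the palm and arm would then be the largest connected component of this mask.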
  • The outer contour of the connected region can be extracted with the cvFindContours function of OpenCV (the Open Source Computer Vision Library).
  • A contour-tracking-based binary-image deburring algorithm can then be used to remove remaining burrs on the outer contour, so that the boundary of the palm-and-arm connected region is accurately delineated, further improving the accuracy of palm-center positioning.
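  • The patent relies on OpenCV's cvFindContours for this step. Purely to make the idea concrete, a toy stand-in (a simplification of real contour tracing, not the patent's implementation) can mark the foreground pixels of a binary mask that touch the background:

```python
def outer_boundary(mask):
    """Return the set of (row, col) foreground pixels that touch the background.

    A foreground pixel belongs to the outer boundary if any of its
    4-neighbours is background or lies off the image.
    """
    h, w = len(mask), len(mask[0])
    boundary = set()
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    boundary.add((i, j))
                    break
    return boundary

blob = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(sorted(outer_boundary(blob)))  # every 1-pixel except the centre (2, 2)
```

Real contour extraction additionally orders the boundary pixels into a closed chain, which OpenCV does internally.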
  • step S4 includes:
  • S43: determine the position of the pixel corresponding to the maximum value as the palm center. Specifically, the shortest distance from the palm center to the palm edge should be the largest of all such distances, so the pixel corresponding to the maximum of the shortest distances is the palm center.
  • The position of the palm center within the connected region of the palm and the arm gives the position of the palm in the gesture, from which the current hand shape can be estimated; by acquiring the coordinates of the palm-center pixels of the same palm over a predetermined time, the movement trajectory of the palm can be obtained. Therefore, once the pixel corresponding to the palm center is determined, position information such as the palm's location and motion trajectory can be obtained, providing data support for gesture recognition.
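  • The substeps of S4 (compute each pixel's shortest distance to the outer contour, take the maximum, and read off the corresponding pixel as the palm center, per S43) can be sketched with a multi-source breadth-first search over a binary mask. This is an illustrative grid approximation of a distance transform, not the patent's exact implementation:

```python
from collections import deque

def palm_center(mask):
    """Locate the palm center of a binary mask as the pixel whose shortest
    distance to the region's edge is maximal.

    Distances are 4-connected grid distances computed with a multi-source
    BFS seeded from every background pixel, a discrete stand-in for a true
    distance transform.
    """
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[0 if not mask[i][j] else INF for j in range(w)] for i in range(h)]
    queue = deque((i, j) for i in range(h) for j in range(w) if not mask[i][j])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] > dist[i][j] + 1:
                dist[ni][nj] = dist[i][j] + 1
                queue.append((ni, nj))
    # The maximum of the shortest distances marks the palm center (S43);
    # it is also the radius of the palm's inscribed circle.
    best = max(
        (dist[i][j], (i, j)) for i in range(h) for j in range(w) if mask[i][j]
    )
    return best[1], best[0]  # (center pixel, inscribed-circle radius)

hand = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(palm_center(hand))  # → ((2, 2), 2)
```

In practice OpenCV's distanceTransform followed by an argmax over the region performs the same computation far more efficiently.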
  • Preferably, the method further includes: when a hole exists in the connected region within the outer contour, filling the hole.
  • The palm-center positioning method of this embodiment first acquires an image including the palm and the arm; then acquires the connected region of the palm and the arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtains the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
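  • The hole-filling step mentioned in the method (illustrated by FIG. 6 and FIG. 7) can be sketched by flood-filling the background from the image border: any background pixel the flood cannot reach lies inside the region and is treated as a hole. This is one common way to implement hole filling, assumed here rather than taken from the patent:

```python
from collections import deque

def fill_holes(mask):
    """Fill holes in a binary connected region.

    Background connected to the image border is true background; any other
    background pixel is a hole inside the region and is set to foreground.
    """
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the flood fill with every background pixel on the image border.
    for i in range(h):
        for j in range(w):
            if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i][j]:
                outside[i][j] = True
                queue.append((i, j))
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni][nj] and not outside[ni][nj]:
                outside[ni][nj] = True
                queue.append((ni, nj))
    # Keep foreground; promote unreached background (holes) to foreground.
    return [[1 if mask[i][j] or not outside[i][j] else 0 for j in range(w)]
            for i in range(h)]

ring = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
filled = fill_holes(ring)
print(filled[2][2])  # → 1 (the interior hole is filled; the border stays 0)
```

Filling holes before the distance computation prevents an interior hole from wrongly shortening a pixel's distance to the "edge" and shifting the detected palm center.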
  • This embodiment provides a palm-center positioning device, as shown in FIG. 8, comprising:
  • the image acquisition unit 1 is configured to acquire an image including a palm and an arm.
  • the connected area acquiring unit 2 is configured to acquire a connected area of the palm and the arm in the image based on the skin color feature in the image.
  • The outer-contour acquiring unit 3 is configured to acquire the outer contour of the connected region. Obtaining the outer contour accurately delineates the boundary of the palm-and-arm connected region, provides a precise reference for the subsequent palm-center positioning, and improves its accuracy.
  • The position-information acquiring unit 4 is configured to acquire the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • Acquiring the palm-center position from this maximum conforms to the actual geometry of the palm and ensures accurate positioning.
  • The position-information acquiring unit 4 includes:
  • the shortest distance calculation subunit 41 is configured to calculate the shortest distance from each pixel point in the connected area in the outer contour to the outer contour.
  • the maximum value acquisition subunit 42 is configured to obtain the maximum value among the shortest distances.
  • the palm position determining sub-unit 43 is configured to determine the position of the pixel point corresponding to the maximum value as the palm position of the palm.
  • The information acquisition subunit 44 is configured to acquire the position information of the palm center.
  • The palm-center positioning device in this embodiment further comprises a filling unit a, configured to fill a hole when a hole exists in the connected region within the outer contour.
  • The image including the palm and the arm is first acquired by the image acquisition unit 1; the connected region of the palm and the arm in the image is then acquired by the connected-region acquiring unit 2 based on skin-color features; the outer contour of the connected region is obtained by the outer-contour acquiring unit 3, clarifying the region boundary to eliminate error; finally, the palm-center position information is obtained by the position-information acquiring unit 4 according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
  • This embodiment provides a gesture recognition method, as shown in FIG. 9, including the following steps:
  • The position information of the palm center is obtained by the palm-center positioning method of Embodiment 1.
  • The palm-center positioning method of Embodiment 1 first acquires an image including the palm and the arm; then acquires the connected region of the palm and the arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtains the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high; the steps of gesture recognition are thereby also simplified, and its efficiency improved.
  • With the palm center as the center and the maximum value as the radius, the inscribed circle of the palm is determined.
  • The shortest distance from the pixel corresponding to the palm center to the outer contour is the largest among all the shortest distances, so taking the palm center as the center and this maximum as the radius yields an inscribed circle that matches the area of the current palm; changes in the circle's area therefore accurately reflect changes in the palm's area. Since the radius of the inscribed circle is positively correlated with the palm area, and the palm area is positively correlated with the length of the fingers, the length and spread of the fingers can also be estimated from the radius of the inscribed circle. Different hands necessarily differ in palm area, so the corresponding inscribed circles naturally differ in area and radius as well. The inscribed circle matching the palm area, obtained by the above method, provides an accurate reference for subsequent gesture recognition.
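  • Once the palm center and the maximum shortest distance are known, the inscribed circle follows directly: its radius is that maximum, and its area is πr². A trivial sketch (the example coordinates are illustrative, not from the patent):

```python
import math

def inscribed_circle(center, radius):
    """The palm's inscribed circle: centered on the palm center, with the
    maximum shortest distance to the outer contour as its radius."""
    return {"center": center, "radius": radius, "area": math.pi * radius ** 2}

# e.g. a palm center at pixel (120, 96) with a maximum shortest distance of 40 px:
circle = inscribed_circle((120, 96), 40)
print(round(circle["area"]))  # → 5027 (square pixels)
```

It is this area value whose change over the predetermined time serves as the shape cue for gesture recognition.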
  • Gesture recognition is performed according to the change in the palm-center position information acquired within the predetermined time and/or the change in the area of the palm's inscribed circle.
  • When the motion trajectory of the hand is used as the reference data for recognizing a gesture, it suffices to acquire the change in the palm-center position over the predetermined time, from which the palm's movement track can be derived: a circular track may represent one gesture, an S-shaped track another, and so on. When recognition is mainly based on the current shape of the hand, it suffices to acquire the change in the area of the palm's inscribed circle within the predetermined time, from which the change in the palm's area is obtained, so as to identify whether the gesture is, for example, five fingers open or a clenched fist.
  • Of course, the change in the palm-center position and the change in the inscribed circle's area can also be acquired simultaneously for more complex gesture recognition: for example, a five-fingers-open circular track represents one gesture, a five-fingers-open S-shaped track another, a fist circular track a third, and so on, generating more gesture-control instructions.
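  • As a toy illustration of combining the two cues, a classifier over the palm-center samples and inscribed-circle areas collected during the predetermined time might look like the sketch below. The thresholds and labels are assumptions made for the example, not values given by the patent:

```python
def classify_gesture(centers, areas, move_thresh=10.0, area_ratio=1.5):
    """Toy gesture classifier combining movement and shape cues.

    `centers` is a list of (x, y) palm-center positions sampled over the
    predetermined time, `areas` the matching inscribed-circle areas.
    """
    # Movement cue: net displacement of the palm center.
    dx = centers[-1][0] - centers[0][0]
    dy = centers[-1][1] - centers[0][1]
    moved = (dx * dx + dy * dy) ** 0.5 > move_thresh
    # Shape cue: relative change of the inscribed-circle area.
    if areas[-1] > areas[0] * area_ratio:
        shape = "five fingers open"
    elif areas[0] > areas[-1] * area_ratio:
        shape = "fist"
    else:
        shape = "unchanged"
    return ("moving" if moved else "static", shape)

print(classify_gesture([(0, 0), (30, 0)], [100.0, 100.0]))  # → ('moving', 'unchanged')
print(classify_gesture([(5, 5), (6, 5)], [100.0, 40.0]))    # → ('static', 'fist')
```

A real recognizer would further match the trajectory against templates (circle, S-shape, etc.) rather than only thresholding the net displacement.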
  • The gesture recognition method in this embodiment first acquires an image including a palm and an arm; then acquires the connected region of the palm and the arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtains the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high; the steps of gesture recognition are thereby also simplified, and its efficiency improved.
  • This embodiment provides a gesture recognition apparatus, as shown in FIG. 10, including:
  • The position information acquiring unit 5 is configured to acquire the palm-center position information by the palm-center positioning method of Embodiment 1.
  • The palm inscribed-circle determining unit 6 is configured to determine the palm's inscribed circle with the palm center as the center and the maximum value as the radius.
  • The gesture recognition unit 7 is configured to perform gesture recognition according to the change in the palm-center position information acquired within the predetermined time and/or the change in the area of the palm's inscribed circle.
  • The gesture recognition apparatus in this embodiment first acquires an image including a palm and an arm; then acquires the connected region of the palm and the arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtains the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high; the steps of gesture recognition are thereby also simplified, and its efficiency improved.
  • This embodiment provides an intelligent terminal, including but not limited to a smart phone, a smart TV, a tablet computer, a computer, and the like.
  • The smart terminal in this embodiment includes the palm-center positioning device of Embodiment 2 and/or the gesture recognition device of Embodiment 4.
  • The palm-center positioning device first acquires an image including the palm and the arm through the image acquisition unit; the connected-region acquiring unit then acquires the connected region of the palm and the arm in the image based on skin-color features; the outer-contour acquiring unit acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and the position-information acquiring unit obtains the palm-center position according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The palm-center positioning steps are simple and easy to perform, and the positioning accuracy is high.
  • Gesture recognition is likewise based on the palm-center position information obtained by this positioning method: the gesture is recognized according to the change in the palm-center position acquired within the predetermined time and/or the change in the area of the palm's inscribed circle. The steps of gesture recognition are thereby also simplified, and its efficiency improved.
  • This embodiment provides an intelligent terminal, including but not limited to a smart phone, a smart TV, a tablet computer, a computer, and the like.
  • The smart terminal in this embodiment includes an image capture device and the palm-center positioning device of Embodiment 2.
  • The image capture device, which captures images including the palm and the arm, may be a camera mounted on the smart terminal.
  • The palm-center positioning device first acquires an image including the palm and the arm through the image acquisition unit; the connected-region acquiring unit then acquires the connected region of the palm and the arm in the image based on skin-color features; the outer-contour acquiring unit acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and the position-information acquiring unit obtains the palm-center position according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
  • This embodiment provides an intelligent terminal, including but not limited to a smart phone, a smart TV, a tablet computer, a computer, and the like.
  • The smart terminal in this embodiment includes an image capture device and the gesture recognition device of Embodiment 4.
  • The image capture device, which captures images including the palm and the arm, may be a camera mounted on the smart terminal.
  • The gesture recognition device first acquires an image including a palm and an arm; then acquires the connected region of the palm and the arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the region boundary to eliminate error; and finally obtains the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high; the steps of gesture recognition are thereby also simplified, and its efficiency improved.
  • The present invention provides a palm-center positioning device, comprising: one or more processors 200; a memory 100; and one or more programs stored in the memory 100 which, when executed by the one or more processors 200, perform the following operations: acquiring an image including a palm and an arm; acquiring the connected region of the palm and the arm in the image based on skin-color features; acquiring the outer contour of the connected region; and acquiring the palm-center position information according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  • As shown in FIG. 11, one processor 200 may be included; as shown in FIG. 12, two processors 200 may be included.
  • The step of acquiring the palm-center position information according to the maximum of the shortest distances further comprises: when a hole exists in the connected region within the outer contour, filling the hole.
  • The present invention further provides a gesture recognition device, comprising: one or more processors 400; a memory 300; and one or more programs stored in the memory 300 which, when executed by the one or more processors 400, perform the following operations: acquiring position information of the palm center by the above palm-center position locating method; determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius; performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
  • As shown in FIG. 13, one processor 400 may be included; as shown in FIG. 14, two processors 400 may be included.
  • Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A palm-center position locating method, a gesture recognition method, corresponding devices, and a smart terminal. The palm-center position locating method first acquires an image including a palm and an arm (S1); then acquires the connected region of the palm and arm in the image based on skin-color features in the image (S2); then acquires the outer contour of the connected region (S3), clarifying the boundary of the connected region to eliminate error; finally, the position information of the palm center can be obtained according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour (S4). The steps are simple and easy to perform, and the positioning accuracy is high.

Description

Palm-center position locating method, gesture recognition method, devices, and smart terminal
This application claims priority to Chinese Patent Application No. 201610177407.0, filed with the Chinese Patent Office on March 25, 2016 and entitled "Palm-center position locating method, gesture recognition method, devices, and smart terminal", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of computer vision and image processing, and in particular to a palm-center position locating method, a gesture recognition method, corresponding devices, and a smart terminal.
Background
With the rapid development of computer technology, new interaction modes based on touch, voice, gesture, and motion sensing have become research hotspots in recent years, giving human-centered human-computer interaction technology broad application prospects. Gesture recognition is an important research direction in human-computer interaction and plays an important role in building intelligent interaction modes. However, because gestures are complex, diverse, variable, and differ across time and space, and because of interference from external factors such as illumination and temperature, gesture recognition still faces many technical difficulties and has become a challenging research topic in human-computer interaction.
In the course of implementing the present invention, the inventor found that because fingers are slender, they are difficult to identify accurately in an image; the palm, by contrast, is wider, so recognizing gestures based on the palm clearly reduces the difficulty of gesture recognition. The key to palm recognition is locating the palm center. In the prior art, a triangle-increment method is usually used for this purpose: first, a gesture contour image is acquired; then convexity-defect detection is performed on the convex hull of the gesture contour to obtain convexity-defect depth points as the point set for the triangle-increment method. Next, a circle is constructed whose diameter is the distance between two arbitrarily chosen defect depth points and whose center is the midpoint between them, and it is checked whether this circle contains all the defect depth points; only if it does is the circle taken as the inscribed circle of the palm for locating the palm center. If it does not, a further defect depth point outside the circle is chosen arbitrarily, and it is then checked whether the triangle formed by these three points is right-angled or obtuse. If it is, a new circle is constructed as above from the two points opposite the right or obtuse angle, and it is again checked whether this circle contains all the defect depth points; only if it does is it taken as the inscribed circle of the palm for locating the palm center. Otherwise the above operations must be repeated until a circle containing all the defect depth points is constructed and taken as the inscribed circle. If the triangle formed by the three defect depth points is acute, the procedure is even more complicated: a circumscribed circle of the acute triangle is first constructed and it is checked whether it contains all the defect depth points; only if it does is it taken as the inscribed circle of the palm for locating the palm center; if it does not, new defect depth points must be selected and the above operations repeated until a circle containing all the defect depth points is obtained as the inscribed circle of the palm. Clearly, the prior-art method for locating the palm center not only involves complicated steps, but an error in any one of them leads to inaccurate positioning.
Summary
Accordingly, the technical problem to be solved by the present invention is to overcome the complicated steps and low accuracy of prior-art palm-center locating methods, and to provide a simple, easy-to-perform, and highly accurate palm-center position locating method, gesture recognition method, devices, and smart terminal.
To this end, the present invention provides the following technical solutions:
The present invention provides a palm-center position locating method, comprising the following steps:
acquiring an image including a palm and an arm;
acquiring a connected region of the palm and arm in the image based on skin-color features in the image;
acquiring an outer contour of the connected region;
acquiring position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
In the above method, before the step of acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour, the method further comprises:
if a hole exists in the connected region within the outer contour, filling the hole.
In the above method, acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour comprises:
calculating the shortest distance from each pixel in the connected region within the outer contour to the outer contour;
obtaining the maximum among the shortest distances;
determining the position of the pixel corresponding to the maximum as the palm-center position;
acquiring the position information of the palm center.
The present invention also provides a palm-center position locating device, comprising:
an image acquisition unit, configured to acquire an image including a palm and an arm;
a connected-region acquisition unit, configured to acquire a connected region of the palm and arm in the image based on skin-color features in the image;
an outer-contour acquisition unit, configured to acquire an outer contour of the connected region;
a position-information acquisition unit, configured to acquire position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
The above device further comprises:
a filling unit, configured to fill a hole when the hole exists in the connected region within the outer contour.
The present invention also provides a gesture recognition method, comprising the following steps:
acquiring position information of the palm center using the above palm-center position locating method;
determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
The present invention also provides a gesture recognition device, comprising:
a position-information acquisition unit, configured to acquire position information of the palm center using the above palm-center position locating method;
an inscribed-circle determining unit, configured to determine an inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
a gesture recognition unit, configured to perform gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
The present invention also provides a smart terminal comprising the above palm-center position locating device and/or the above gesture recognition device.
The present invention also provides a smart terminal comprising an image capture device and the above palm-center position locating device;
the image capture device is configured to capture an image including a palm and an arm.
The present invention also provides a smart terminal comprising an image capture device and the above gesture recognition device;
the image capture device is configured to capture an image including a palm and an arm.
The technical solutions of the present invention have the following advantages:
The present invention provides a palm-center position locating method and device, which first acquire an image including a palm and an arm; then acquire the connected region of the palm and arm in the image based on skin-color features; then acquire the outer contour of the connected region, clarifying the boundary of the region to eliminate error; finally, the position information of the palm center can be obtained according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
The present invention also provides a gesture recognition method and device, which acquire position information of the palm center using the above palm-center position locating method; determine an inscribed circle of the palm with the palm center as its center and the maximum value as its radius; and perform gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm. Because the steps for locating the palm center are simple and easy to perform and the positioning accuracy is high, the steps of gesture recognition are also simplified and the efficiency of gesture recognition is improved.
The present invention provides a palm-center position locating device, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: acquiring an image including a palm and an arm; acquiring a connected region of the palm and arm in the image based on skin-color features in the image; acquiring an outer contour of the connected region; acquiring position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
In the device, before the step of acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour, the operations further comprise: if a hole exists in the connected region within the outer contour, filling the hole.
The present invention further provides a gesture recognition device, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: acquiring position information of the palm center by the above palm-center position locating method; determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius; performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
Brief Description of the Drawings
To explain the technical solutions of the specific embodiments of the present invention or of the prior art more clearly, the drawings required for describing them are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a specific example of the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a specific example of an image including a palm and an arm acquired in the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of a specific example of the connected region of the palm and arm in the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 4 is a flowchart of a specific example of acquiring the position information of the palm center in the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 5 is a flowchart of a preferred specific example of the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of a specific example of a connected region containing a hole in the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of the effect of filling a connected region containing a hole in the palm-center position locating method in Embodiment 1 of the present invention;
FIG. 8 is a structural block diagram of a specific example of the palm-center position locating device in Embodiment 2 of the present invention;
FIG. 9 is a flowchart of a specific example of the gesture recognition method in Embodiment 3 of the present invention;
FIG. 10 is a structural block diagram of a specific example of the gesture recognition device in Embodiment 4 of the present invention;
FIG. 11 is a structural diagram of the palm-center position locating device with one processor in Embodiment 8 of the present invention;
FIG. 12 is a structural diagram of the palm-center position locating device with two processors in Embodiment 8 of the present invention;
FIG. 13 is a structural diagram of the gesture recognition device with one processor in Embodiment 9 of the present invention;
FIG. 14 is a structural diagram of the gesture recognition device with two processors in Embodiment 9 of the present invention.
Reference numerals:
1 - image acquisition unit; 2 - connected-region acquisition unit; 3 - outer-contour acquisition unit; 4 - position-information acquisition unit; a - filling unit; 41 - shortest-distance calculation subunit; 42 - maximum-value acquisition subunit; 43 - palm-center position determining subunit; 44 - information acquisition subunit; 5 - position-information acquisition unit; 6 - inscribed-circle determining unit; 7 - gesture recognition unit.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the present invention. In addition, the terms "first", "second", and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; it may be internal communication between two elements; and it may be wireless or wired. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.
Moreover, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Embodiment 1
This embodiment provides a palm-center position locating method, which, as shown in FIG. 1, comprises the following steps:
S1. Acquire an image including a palm and an arm. Specifically, when a user makes a gesture within the shooting range of a device with video or photo capability, such as a camera, an image including the user's palm and arm can be captured and transmitted to a storage device, from which the image including the palm and arm can then be obtained. FIG. 2 shows such an image including a palm and an arm.
S2. Acquire the connected region of the palm and arm in the image based on skin-color features in the image. Specifically, taking the image including a palm and an arm shown in FIG. 2 as an example, the image can be converted to the HSV or YCrCb color space, and each pixel can then be classified as skin or non-skin according to skin-color features, thereby obtaining the connected region of the palm and arm in the image, as shown in FIG. 3.
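As a rough illustration of the skin-or-non-skin classification in step S2, the sketch below converts RGB pixels to YCrCb (ITU-R BT.601) and thresholds the Cr/Cb channels. The threshold ranges (Cr in [133, 173], Cb in [77, 127]) are commonly cited rules of thumb, not values taken from this patent, and a production implementation would more likely use OpenCV's `cvtColor` and `inRange`:

```python
def rgb_to_ycrcb(r, g, b):
    """ITU-R BT.601 conversion from 8-bit RGB to YCrCb (full range)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def is_skin(r, g, b):
    """Classify a pixel as skin using illustrative Cr/Cb threshold ranges."""
    _, cr, cb = rgb_to_ycrcb(r, g, b)
    return 133 <= cr <= 173 and 77 <= cb <= 127

def skin_mask(rgb_image):
    """Binary mask over a row-major list of RGB tuples: 1 = skin, 0 = not."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in rgb_image]
```

The largest connected component of this mask would then be taken as the palm-and-arm region.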
S3. Acquire the outer contour of the connected region. Specifically, the cvFindContours function of OpenCV (Open Source Computer Vision Library) can be used to obtain the outer contour of the connected region. Acquiring the outer contour provides a reference for the subsequent palm-center localization. Preferably, a contour-tracking-based deburring algorithm for binary images can be applied to further remove burrs on the outer contour and delimit the boundary of the palm-and-arm connected region accurately, further improving the precision of palm-center localization.
S4. Acquire the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. Specifically, by the positional characteristics of the palm center, the shortest distance from the pixel at the palm center to the outer contour should be the largest; obtaining the palm-center position from this maximum therefore matches the actual situation and ensures accurate localization of the palm center.
Preferably, as shown in FIG. 4, step S4 comprises:
S41. Calculate the shortest distance from each pixel in the connected region within the outer contour to the outer contour. Specifically, for a given pixel, all contour points on the outer contour are first traversed and the distance from the pixel to each contour point is computed; after the traversal, these distances are compared to select the shortest distance from the pixel to the outer contour. After every pixel in the connected region within the outer contour has been processed in this way, the shortest distance from each pixel to the outer contour is known. Computing the shortest distance for every pixel eliminates in advance distances from pixels in the palm region to the finger contour, or from pixels in the finger region to the palm contour, reducing the misjudgment rate.
S42. Obtain the maximum among the shortest distances. Specifically, by comparing the shortest distances of all pixels, the maximum among the shortest distances is obtained.
S43. Determine the position of the pixel corresponding to the maximum as the palm-center position. Specifically, the shortest distance from the palm center to the palm edge should be the largest, so the pixel corresponding to the maximum of the shortest distances is the palm-center position.
S44. Acquire the position information of the palm center. Specifically, as required, the position of the palm center within the gesture can be obtained from the location of the corresponding pixel within the palm-and-arm connected region, from which the current hand shape can be estimated; and by obtaining the coordinates of the palm-center pixels of the same palm at multiple times within a predetermined period, the movement trajectory of the palm center can be obtained, and so on. Thus, once the pixel corresponding to the palm center is determined, various kinds of palm-center position information, such as its position and motion trajectory, can be obtained, providing data support for subsequent gesture recognition.
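The brute-force search of steps S41-S44 can be sketched in pure Python. The sketch below treats the outer contour as the set of foreground pixels that touch the background (4-neighborhood), then picks the foreground pixel whose shortest distance to the contour is largest; the grid representation and names are illustrative, and a real implementation would more likely use a distance transform (e.g. OpenCV's `cv2.distanceTransform`) for speed:

```python
import math

def contour_points(mask):
    """Foreground pixels with at least one 4-neighbor outside the region."""
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(not (0 <= a < h and 0 <= b < w) or not mask[a][b]
                   for a, b in nbrs):
                pts.append((i, j))
    return pts

def palm_center(mask):
    """Palm center = foreground pixel maximizing its shortest distance to
    the outer contour; also returns that distance (the inscribed radius)."""
    contour = contour_points(mask)
    best, best_d = None, -1.0
    for i in range(len(mask)):
        for j in range(len(mask[0])):
            if not mask[i][j]:
                continue
            d = min(math.hypot(i - a, j - b) for a, b in contour)
            if d > best_d:
                best, best_d = (i, j), d
    return best, best_d
```

For a filled 7x7 square the unique maximizer is the geometric center, at shortest distance 3 from the contour; that distance is exactly the radius used for the palm's inscribed circle in the gesture recognition method of Embodiment 3.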
Preferably, as shown in FIG. 5, before step S4 the method further comprises:
Sa. If a hole exists in the connected region within the outer contour, fill the hole. Specifically, as shown in FIG. 6, the connected region of the palm and arm obtained by skin-color detection may contain holes. When a hole lies near the center of the palm, the pixel at the palm-center position may happen to be missing, causing the located palm center to drift and hampering accurate gesture recognition later. Filling the holes ensures that the pixel at the palm-center position is not missing and reduces the error rate of palm-center localization. In practice, the cvDrawContours function of OpenCV (Open Source Computer Vision Library) can be used to fill the holes; the result, shown in FIG. 7, achieves a very good filling effect. Alternatively, region growing can be used: with the outer contour as the boundary, any pixel within the outer contour is taken as a seed point, and region growing then fills the holes inside the outer contour.
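The hole-filling step Sa can be approximated without OpenCV by flood-filling the background from the image border: any background pixel not reachable from the border is enclosed by the region and is therefore a hole. This is a minimal sketch under that assumption; `cvDrawContours` (or `cv2.drawContours` with a filled thickness), as the embodiment suggests, is the practical choice:

```python
from collections import deque

def fill_holes(mask):
    """Set to foreground every background pixel not connected to the border."""
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    # Seed the flood fill with every background pixel on the border.
    queue = deque((i, j) for i in range(h) for j in range(w)
                  if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i][j])
    for i, j in queue:
        outside[i][j] = True
    while queue:
        i, j = queue.popleft()
        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= a < h and 0 <= b < w and not mask[a][b] and not outside[a][b]:
                outside[a][b] = True
                queue.append((a, b))
    # Anything that is neither foreground nor reachable background is a hole.
    return [[1 if mask[i][j] or not outside[i][j] else 0 for j in range(w)]
            for i in range(h)]
```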
The palm-center position locating method in this embodiment first acquires an image including a palm and an arm; then acquires the connected region of the palm and arm in the image based on skin-color features; then acquires the outer contour of the connected region, clarifying the boundary of the region to eliminate error; finally, the position information of the palm center can be obtained according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
Embodiment 2
This embodiment provides a palm-center position locating device, which, as shown in FIG. 8, comprises:
an image acquisition unit 1, configured to acquire an image including a palm and an arm;
a connected-region acquisition unit 2, configured to acquire the connected region of the palm and arm in the image based on skin-color features in the image;
an outer-contour acquisition unit 3, configured to acquire the outer contour of the connected region. By acquiring the outer contour of the connected region, the boundary of the palm-and-arm connected region can be delimited accurately, providing a precise reference for subsequent palm-center localization and improving its precision;
a position-information acquisition unit 4, configured to acquire the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. Obtaining the palm-center position information from this maximum matches the actual situation and ensures accurate localization of the palm center.
Preferably, the position-information acquisition unit 4 comprises:
a shortest-distance calculation subunit 41, configured to calculate the shortest distance from each pixel in the connected region within the outer contour to the outer contour;
a maximum-value acquisition subunit 42, configured to obtain the maximum among the shortest distances;
a palm-center position determining subunit 43, configured to determine the position of the pixel corresponding to the maximum as the palm-center position;
an information acquisition subunit 44, configured to acquire the position information of the palm center.
Preferably, the palm-center position locating device in this embodiment further comprises a filling unit a, configured to fill a hole when the hole exists in the connected region within the outer contour. Filling the hole ensures that the pixel at the palm-center position is not missing and reduces the error rate of palm-center localization.
In the palm-center position locating device of this embodiment, the image acquisition unit 1 first acquires an image including a palm and an arm; the connected-region acquisition unit 2 then acquires the connected region of the palm and arm based on skin-color features in the image; the outer-contour acquisition unit 3 then acquires the outer contour of the connected region, clarifying its boundary to eliminate error; finally, the position-information acquisition unit 4 obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
Embodiment 3
This embodiment provides a gesture recognition method, which, as shown in FIG. 9, comprises the following steps:
Y1. Acquire the position information of the palm center using the palm-center position locating method of Embodiment 1. That method first acquires an image including a palm and an arm; then acquires the connected region of the palm and arm based on skin-color features in the image; then acquires the outer contour of the connected region, clarifying its boundary to eliminate error; finally, it obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform and the positioning accuracy is high, which also simplifies the steps of gesture recognition and improves its efficiency.
Y2. Determine the inscribed circle of the palm with the palm center as its center and the maximum value as its radius. Specifically, the shortest distance from the palm-center pixel to the outer contour should be the largest among all the shortest distances, so a circle centered at the palm center with this maximum as its radius matches the area of the current palm well; accordingly, the change in its area accurately reflects the change in the palm area. Because the radius of the inscribed circle is positively correlated with the palm area, which in turn is positively correlated with the lengths of the fingers, the lengths and spread of the fingers can be estimated from the radius of the inscribed circle. For example, different gestures such as an open hand with five fingers spread and a clenched fist necessarily have different palm areas, hence different inscribed-circle areas and, correspondingly, different inscribed-circle radii. In summary, this approach yields an inscribed circle that matches the palm area well and provides an accurate reference for subsequent gesture recognition.
Y3. Perform gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm. Specifically, if the motion trajectory of the hand is used as the reference data for recognizing gestures, only the change in the palm-center position within the predetermined time needs to be acquired; from this change the trajectory of the palm center can be obtained, for example a circular trajectory representing one gesture and an S-shaped trajectory representing another. If recognition is mainly based on the current shape of the hand, only the change in the area of the inscribed circle within the predetermined time needs to be acquired, from which the change in palm area is obtained to identify whether the gesture is an open hand, a clenched fist, or another shape. Of course, both the change in palm-center position and the change in inscribed-circle area within the predetermined time can be acquired for more complex gesture recognition, for example an open hand drawing a circle as one gesture, an open hand drawing an S as another, and a fist drawing a circle as a third, generating more gesture control commands.
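As a toy illustration of step Y3, the sketch below classifies a tracked sequence of (x, y, radius) palm-center samples by comparing the change in inscribed-circle area with the net displacement of the center. The threshold values and gesture labels are made-up examples for illustration, not values prescribed by this patent:

```python
import math

def classify_gesture(track, area_ratio_thresh=0.6, move_thresh=20.0):
    """track: list of (x, y, r) palm-center samples over a predetermined time.
    An area ratio below the threshold suggests the hand closing into a fist;
    otherwise a large net displacement suggests a swipe; else static."""
    (x0, y0, r0), (x1, y1, r1) = track[0], track[-1]
    area_ratio = (r1 * r1) / (r0 * r0)        # pi cancels out of the ratio
    displacement = math.hypot(x1 - x0, y1 - y0)
    if area_ratio < area_ratio_thresh:
        return "fist_closing"
    if displacement > move_thresh:
        return "swipe"
    return "static"
```

A fuller implementation would compare whole trajectories (circle vs. S-shape, as the embodiment describes) rather than only the endpoints.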
The gesture recognition method in this embodiment first acquires an image including a palm and an arm; then acquires the connected region of the palm and arm based on skin-color features in the image; then acquires the outer contour of the connected region, clarifying its boundary to eliminate error; finally, it obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform and the positioning accuracy is high, which also simplifies the steps of gesture recognition and improves its efficiency.
Embodiment 4
This embodiment provides a gesture recognition device, which, as shown in FIG. 10, comprises:
a position-information acquisition unit 5, configured to acquire the position information of the palm center using the palm-center position locating method of Embodiment 1;
an inscribed-circle determining unit 6, configured to determine the inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
a gesture recognition unit 7, configured to perform gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
The gesture recognition device in this embodiment first acquires an image including a palm and an arm; then acquires the connected region of the palm and arm based on skin-color features in the image; then acquires the outer contour of the connected region, clarifying its boundary to eliminate error; finally, it obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform and the positioning accuracy is high, which also simplifies the steps of gesture recognition and improves its efficiency.
Embodiment 5
This embodiment provides a smart terminal, including but not limited to a smartphone, smart TV, tablet, or computer. The smart terminal of this embodiment comprises the palm-center position locating device of Embodiment 2 and/or the gesture recognition device of Embodiment 4.
In the smart terminal of this embodiment, the palm-center position locating device first acquires an image including a palm and an arm through the image acquisition unit; then acquires the connected region of the palm and arm based on skin-color features through the connected-region acquisition unit; then acquires the outer contour of the connected region through the outer-contour acquisition unit, clarifying its boundary to eliminate error; finally, the position-information acquisition unit obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high. Gesture recognition is likewise based on the palm-center position information obtained by this locating method, using the change in the palm-center position information and/or the change in the inscribed-circle area acquired within a predetermined time, which also simplifies the steps of gesture recognition and improves its efficiency.
Embodiment 6
This embodiment provides a smart terminal, including but not limited to a smartphone, smart TV, tablet, or computer. The smart terminal of this embodiment comprises an image capture device and the palm-center position locating device of Embodiment 2.
The image capture device is configured to capture an image including a palm and an arm. Specifically, it may be a camera mounted on the smart terminal.
In the smart terminal of this embodiment, the palm-center position locating device first acquires an image including a palm and an arm through the image acquisition unit; then acquires the connected region of the palm and arm based on skin-color features through the connected-region acquisition unit; then acquires the outer contour of the connected region through the outer-contour acquisition unit, clarifying its boundary to eliminate error; finally, the position-information acquisition unit obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform, and the positioning accuracy is high.
Embodiment 7
This embodiment provides a smart terminal, including but not limited to a smartphone, smart TV, tablet, or computer. The smart terminal of this embodiment comprises an image capture device and the gesture recognition device of Embodiment 4.
The image capture device is configured to capture an image including a palm and an arm. Specifically, it may be a camera mounted on the smart terminal.
In the smart terminal of this embodiment, the gesture recognition device first acquires an image including a palm and an arm; then acquires the connected region of the palm and arm based on skin-color features in the image; then acquires the outer contour of the connected region, clarifying its boundary to eliminate error; finally, it obtains the position information of the palm center from the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. The steps are simple and easy to perform and the positioning accuracy is high, which also simplifies the steps of gesture recognition and improves its efficiency.
Embodiment 8
The present invention provides a palm-center position locating device, comprising: one or more processors 200; a memory 100; and one or more programs stored in the memory 100 which, when executed by the one or more processors 200, perform the following operations: acquiring an image including a palm and an arm; acquiring a connected region of the palm and arm in the image based on skin-color features in the image; acquiring an outer contour of the connected region; acquiring position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour. Specifically, one processor 200 may be included as shown in FIG. 11, or two processors 200 as shown in FIG. 12.
In the device, before the step of acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour, the operations further comprise: if a hole exists in the connected region within the outer contour, filling the hole.
Embodiment 9
The present invention further provides a gesture recognition device, comprising: one or more processors 400; a memory 300; and one or more programs stored in the memory 300 which, when executed by the one or more processors 400, perform the following operations: acquiring position information of the palm center by the above palm-center position locating method; determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius; performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm. Specifically, one processor 400 may be included as shown in FIG. 13, or two processors 400 as shown in FIG. 14.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Claims (13)

  1. A palm-center position locating method, characterized by comprising the following steps:
    acquiring an image including a palm and an arm;
    acquiring a connected region of the palm and arm in the image based on skin-color features in the image;
    acquiring an outer contour of the connected region;
    acquiring position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  2. The method according to claim 1, characterized in that before the step of acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour, the method further comprises:
    if a hole exists in the connected region within the outer contour, filling the hole.
  3. The method according to claim 1 or 2, characterized in that acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour comprises:
    calculating the shortest distance from each pixel in the connected region within the outer contour to the outer contour;
    obtaining the maximum among the shortest distances;
    determining the position of the pixel corresponding to the maximum as the palm-center position;
    acquiring the position information of the palm center.
  4. A palm-center position locating device, characterized by comprising:
    an image acquisition unit (1), configured to acquire an image including a palm and an arm;
    a connected-region acquisition unit (2), configured to acquire a connected region of the palm and arm in the image based on skin-color features in the image;
    an outer-contour acquisition unit (3), configured to acquire an outer contour of the connected region;
    a position-information acquisition unit (4), configured to acquire position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  5. The device according to claim 4, characterized by further comprising:
    a filling unit (a), configured to fill a hole when the hole exists in the connected region within the outer contour.
  6. A gesture recognition method, characterized by comprising the following steps:
    acquiring position information of the palm center using the palm-center position locating method according to any one of claims 1-3;
    determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
    performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
  7. A gesture recognition device, characterized by comprising:
    a position-information acquisition unit (5), configured to acquire position information of the palm center using the palm-center position locating method according to any one of claims 1-3;
    an inscribed-circle determining unit (6), configured to determine an inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
    a gesture recognition unit (7), configured to perform gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
  8. A smart terminal, characterized by comprising the palm-center position locating device according to claim 4 or 5 and/or the gesture recognition device according to claim 7.
  9. A smart terminal, characterized by comprising an image capture device and the palm-center position locating device according to claim 4 or 5;
    the image capture device being configured to capture an image including a palm and an arm.
  10. A smart terminal, characterized by comprising an image capture device and the gesture recognition device according to claim 7;
    the image capture device being configured to capture an image including a palm and an arm.
  11. A palm-center position locating device, characterized by comprising:
    one or more processors;
    a memory;
    one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
    acquiring an image including a palm and an arm;
    acquiring a connected region of the palm and arm in the image based on skin-color features in the image;
    acquiring an outer contour of the connected region;
    acquiring position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour.
  12. The device according to claim 11, characterized in that before the step of acquiring the position information of the palm center according to the maximum of the shortest distances from each pixel in the connected region within the outer contour to the outer contour, the operations further comprise: if a hole exists in the connected region within the outer contour, filling the hole.
  13. A gesture recognition device, characterized by comprising:
    one or more processors;
    a memory;
    one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
    acquiring position information of the palm center using the palm-center position locating method according to any one of claims 1-3;
    determining an inscribed circle of the palm with the palm center as its center and the maximum value as its radius;
    performing gesture recognition according to the change in the position information of the palm center acquired within a predetermined time and/or the change in the area of the inscribed circle of the palm.
PCT/CN2016/089380 2016-03-25 2016-07-08 Palm-center position locating method, gesture recognition method, devices, and smart terminal WO2017161778A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/245,159 US20170277944A1 (en) 2016-03-25 2016-08-23 Method and electronic device for positioning the center of palm

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610177407.0 2016-03-25
CN201610177407.0A CN105825193A (zh) 2016-03-25 2016-03-25 Palm-center position locating method, gesture recognition method, devices, and smart terminal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/245,159 Continuation US20170277944A1 (en) 2016-03-25 2016-08-23 Method and electronic device for positioning the center of palm

Publications (1)

Publication Number Publication Date
WO2017161778A1 true WO2017161778A1 (zh) 2017-09-28

Family

ID=56525238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089380 WO2017161778A1 (zh) 2016-03-25 2016-07-08 Palm-center position locating method, gesture recognition method, devices, and smart terminal

Country Status (2)

Country Link
CN (1) CN105825193A (zh)
WO (1) WO2017161778A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857244A (zh) * 2017-11-30 2019-06-07 百度在线网络技术(北京)有限公司 Gesture recognition method and apparatus, terminal device, storage medium, and VR glasses
CN111291749A (zh) * 2020-01-20 2020-06-16 深圳市优必选科技股份有限公司 Gesture recognition method and apparatus, and robot
CN117455940A (zh) * 2023-12-25 2024-01-26 四川汉唐云分布式存储技术有限公司 Cloud-attendance-based customer behavior detection method, system, device, and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503619B (zh) * 2016-09-23 2020-06-19 南京理工大学 BP-neural-network-based gesture recognition method
CN108230328B (zh) * 2016-12-22 2021-10-22 新沂阿凡达智能科技有限公司 Method and apparatus for acquiring a target object, and robot
CN106980828B (zh) * 2017-03-17 2020-06-19 深圳市魔眼科技有限公司 Method, apparatus, and device for determining the palm region in gesture recognition
CN107589850A (zh) * 2017-09-26 2018-01-16 深圳睛灵科技有限公司 Method and system for recognizing the movement direction of a gesture
CN108748139A (zh) * 2018-04-18 2018-11-06 四川文理学院 Somatosensory-based robot control method and apparatus
CN108921129B (zh) * 2018-07-20 2021-05-14 杭州易现先进科技有限公司 Image processing method, system, medium, and electronic device
CN110533714A (zh) * 2019-08-21 2019-12-03 合肥晌玥科技有限公司 Method and system for detecting the maximum inscribed circle of a target object based on image processing
CN111309149B (zh) * 2020-02-21 2022-08-19 河北科技大学 Gesture recognition method and gesture recognition apparatus
CN111626168B (zh) * 2020-05-20 2022-12-02 中移雄安信息通信科技有限公司 Gesture recognition method, apparatus, device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722701A (zh) * 2012-05-24 2012-10-10 清华大学 Visual monitoring method and device during fingerprint acquisition
CN104102347A (zh) * 2014-07-09 2014-10-15 东莞万士达液晶显示器有限公司 Fingertip positioning method and fingertip positioning terminal
CN104899600A (zh) * 2015-05-28 2015-09-09 北京工业大学 Depth-map-based hand feature point detection method
CN105138990A (zh) * 2015-08-27 2015-12-09 湖北师范学院 Monocular-camera-based gesture convex-hull detection and palm-center positioning method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081918B (zh) * 2010-09-28 2013-02-20 北京大学深圳研究生院 Video image display control method and video image display

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857244A (zh) * 2017-11-30 2019-06-07 百度在线网络技术(北京)有限公司 Gesture recognition method and apparatus, terminal device, storage medium, and VR glasses
CN109857244B (zh) * 2017-11-30 2023-09-01 百度在线网络技术(北京)有限公司 Gesture recognition method and apparatus, terminal device, storage medium, and VR glasses
CN111291749A (zh) * 2020-01-20 2020-06-16 深圳市优必选科技股份有限公司 Gesture recognition method and apparatus, and robot
CN111291749B (zh) * 2020-01-20 2024-04-23 深圳市优必选科技股份有限公司 Gesture recognition method and apparatus, and robot
CN117455940A (zh) * 2023-12-25 2024-01-26 四川汉唐云分布式存储技术有限公司 Cloud-attendance-based customer behavior detection method, system, device, and storage medium
CN117455940B (zh) * 2023-12-25 2024-02-27 四川汉唐云分布式存储技术有限公司 Cloud-attendance-based customer behavior detection method, system, device, and storage medium

Also Published As

Publication number Publication date
CN105825193A (zh) 2016-08-03

Similar Documents

Publication Publication Date Title
WO2017161778A1 (zh) Palm-center position locating and gesture recognition methods, devices, and smart terminal
WO2018028649A1 (zh) Mobile device, positioning method therefor, and computer storage medium
JP2021522564A (ja) System and method for detecting human gaze and gestures in unconstrained environments
CN110502104A (zh) Device for contactless operation using a depth sensor
CN105373785A (zh) Deep-neural-network-based gesture recognition and detection method and apparatus
KR102329761B1 (ko) Electronic device for selecting and controlling external devices, and operating method thereof
JP2017523487A (ja) Eye gaze tracking based on adaptive homography mapping
WO2019033576A1 (zh) Face posture detection method, apparatus, and storage medium
US20130154947A1 (en) Determining a preferred screen orientation based on known hand positions
JP2007316882A (ja) Remote control device and method
CN102810015B (zh) Spatial-motion-based input method and terminal
TWI431538B (zh) Image-based motion gesture recognition method and system
US9069415B2 (en) Systems and methods for finger pose estimation on touchscreen devices
US11546982B2 (en) Systems and methods for determining lighting fixture arrangement information
CN104102347A (zh) Fingertip positioning method and fingertip positioning terminal
CN109839827B (zh) Gesture-recognition smart home control system based on full spatial position information
WO2022002262A1 (zh) Computer-vision-based character sequence recognition method, apparatus, device, and medium
CN107450717B (zh) Information processing method and wearable device
CN104615231B (zh) Method and device for determining input information
CN114092985A (zh) Terminal control method and apparatus, terminal, and storage medium
CN115480511A (zh) Robot interaction method and apparatus, storage medium, and device
WO2022001505A1 (zh) Method and apparatus for automatically switching the NFC card of a wearable device, and wearable device
WO2021244650A1 (zh) Control method and apparatus, terminal, and storage medium
WO2016145827A1 (zh) Terminal control method and apparatus
CN104063041A (zh) Information processing method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16895128

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16895128

Country of ref document: EP

Kind code of ref document: A1