CN104834377A - Audio control method based on 3D (3-Dimensional) gesture recognition - Google Patents
- Publication number
- CN104834377A (application number CN201510222339.0A)
- Authority
- CN
- China
- Prior art keywords
- audio
- axis coordinate
- gesture identification
- audio frequency
- control method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an audio control method based on 3D (three-dimensional) gesture recognition, comprising the following steps: S1) acquire the electric field data in the gesture recognition region; S2) establish a spatial 3D coordinate system in the gesture recognition region; S3) acquire the position coordinates of the electric-field change region within the gesture recognition region; S4) repeat step S3 to obtain the dynamic change data of those position coordinates; S5) adjust the audio volume in real time according to the dynamic change data of the X-axis coordinate; S6) control audio switching in real time according to the dynamic change data of the Y-axis coordinate; S7) control audio play and pause in real time according to the dynamic change data of the Z-axis coordinate. By recognizing gesture motions in 3D space, the method realizes operations such as play/pause, volume adjustment and track switching of audio on smart devices, and is natural, concise and novel.
Description
Technical field
The invention belongs to the field of embedded software technology, and specifically relates to the design of an audio control method based on 3D gesture recognition.
Background art
In the interaction between a user and a smart device, the input method is particularly important: a convenient input method improves the user experience. In the prior art, input for audio control on smart devices is generally keyboard input or touch input. On the one hand, these two input methods are mature and stable implementations that users have largely come to accept; on the other hand, they lack novelty, making it difficult for users to personalize their smart devices.
In recent years, with the rapid development of computer technology, research into novel human-computer interaction techniques that match the habits of interpersonal communication has become very active and has made encouraging progress. This research includes face recognition, facial expression recognition, lip reading, head-movement tracking, gaze tracking, gesture recognition and body posture recognition. In general, human-computer interaction is gradually shifting from being computer-centered to being human-centered, toward multimedia, multi-modal interaction techniques.
A gesture is any action made by the human hand under a person's conscious control, such as bending or extending the fingers or moving the hand through space; it may perform a task, or communicate with another person to express a meaning or intention. Gestures are a natural, intuitive and easy-to-learn means of human-computer interaction. With the hand itself as the computer's input device, communication between human and machine no longer needs an intermediate medium, and the user can simply define suitable gestures to control the machines around them. Compared with other input methods, using the hand directly as the input medium is natural, concise, expressive and direct.
Summary of the invention
The object of the invention is to solve the problem that, in the prior art, input methods for audio control on smart devices lack novelty and make it difficult for users to personalize their devices, by proposing an audio control method based on 3D gesture recognition.
The technical solution of the present invention is an audio control method based on 3D gesture recognition, comprising the following steps:
S1, acquire the electric field data in the gesture recognition region;
S2, establish a spatial 3D coordinate system in the gesture recognition region;
S3, acquire the position coordinates of the electric-field change region within the gesture recognition region;
S4, repeat step S3 to obtain the dynamic change data of the position coordinates of the electric-field change region;
S5, adjust the audio volume in real time according to the dynamic change data of the X-axis coordinate;
S6, control switching between audio tracks in real time according to the dynamic change data of the Y-axis coordinate;
S7, control audio play and pause in real time according to the dynamic change data of the Z-axis coordinate.
Further, step S2 specifically comprises the following sub-steps:
S21, select a point in the gesture recognition region as the coordinate origin;
S22, determine the positive directions of the X-, Y- and Z-axes to establish the spatial 3D coordinate system.
Further, step S5 specifically comprises the following sub-steps:
S51, set the correspondence between the change in the X-axis coordinate and the change in volume;
S52, set the acquisition time interval ΔT_x of the X-axis coordinate data;
S53, calculate the change in the X-axis coordinate within each acquisition interval ΔT_x according to formula (1):
ΔX_n = X_n − X_{n−1} (n = 1, 2, 3, …)   (1);
S54, adjust the audio volume in real time according to the correspondence set in step S51.
Further, step S6 specifically comprises the following sub-steps:
S61, set the acquisition time interval ΔT_y of the Y-axis coordinate data;
S62, calculate the change in the Y-axis coordinate within each acquisition interval ΔT_y according to formula (2):
ΔY_n = Y_n − Y_{n−1} (n = 1, 2, 3, …)   (2);
S63, set the audio switching trigger thresholds Y_max and Y_min;
S64, compare the change ΔY_n of the Y-axis coordinate with Y_max and Y_min respectively:
if ΔY_n ≥ Y_max, switch to the next track in the audio playlist;
if ΔY_n ≤ Y_min, switch to the previous track in the audio playlist;
if Y_min < ΔY_n < Y_max, continue playing the current track.
Further, the value of Y_max is positive and the value of Y_min is negative.
Further, step S7 specifically comprises the following sub-steps:
S71, define the click trigger threshold Z_m;
S72, define the click trigger condition: when the Z-axis coordinate first decreases by more than the click trigger threshold Z_m and subsequently increases by more than Z_m, this is defined as triggering one click, recorded as click count N_z = 1;
S73, set the click-count decision time interval ΔT_z;
S74, control audio play and pause in real time according to the number of clicks N_z within the decision interval ΔT_z:
if N_z = 1, play the audio;
if N_z = 2, pause the audio;
if N_z ≠ 1 and N_z ≠ 2, keep the current audio state.
The beneficial effects of the invention are as follows: by recognizing gesture motions in 3D space, the invention realizes operations such as play/pause, volume adjustment and track switching of audio on smart devices, enables personalized customization of a product, and is natural, concise and novel.
Brief description of the drawings
Fig. 1 is a flowchart of an audio control method based on 3D gesture recognition provided by the invention.
Fig. 2 is a flowchart of the sub-steps of step S2 of the invention.
Fig. 3 is a flowchart of the sub-steps of step S5 of the invention.
Fig. 4 is a flowchart of the sub-steps of step S6 of the invention.
Fig. 5 is a flowchart of the sub-steps of step S7 of the invention.
Detailed description
Embodiments of the invention are further described below in conjunction with the accompanying drawings.
The invention provides an audio control method based on 3D gesture recognition which, as shown in Fig. 1, comprises the following steps:
S1, acquire the electric field data in the gesture recognition region;
Here an electric field strength sensor measures the gesture recognition region and obtains its initial electric field data, in order to:
(1) provide a reference for subsequently establishing the spatial 3D coordinate system in the gesture recognition region;
(2) facilitate the subsequent acquisition of the dynamic change data of the electric field signal.
S2, establish a spatial 3D coordinate system in the gesture recognition region;
As shown in Fig. 2, this step specifically comprises the following sub-steps:
S21, select a point in the gesture recognition region as the coordinate origin;
The invention places no particular restriction on the choice of origin; it is usually chosen near the center of the gesture recognition region.
S22, determine the positive directions of the X-, Y- and Z-axes to establish the spatial 3D coordinate system.
In the embodiment of the invention, the direction pointing away from the electric field strength sensor is taken as the positive direction of the Y-axis; the direction to the right of the sensor is taken as the positive direction of the X-axis, with the X-axis perpendicular to the Y-axis; and the direction directly above the sensor is taken as the positive direction of the Z-axis, with the Z-axis perpendicular to the plane of the X- and Y-axes, thereby establishing the spatial 3D coordinate system.
S3, acquire the position coordinates of the electric-field change region within the gesture recognition region;
S4, repeat step S3 to obtain the dynamic change data of the position coordinates of the electric-field change region;
Because the user's gesture cuts the electric field lines in the gesture recognition region, it changes the electric field signal data. The position coordinates of the electric-field change region therefore reflect the position of the user's gesture, and the dynamic change data of those position coordinates characterize the physical motion of the gesture.
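The repeated acquisition of steps S3 and S4 can be sketched as a periodic sampling loop. This is a minimal illustration only: `read_field_position` is a hypothetical stand-in for the electric field strength sensor driver (not named in the original), and the fixed sample count is an added assumption.

```python
import time

def read_field_position():
    """Hypothetical driver call: returns the (x, y, z) position (in cm)
    of the electric-field change region reported by the sensor."""
    return (0.0, 0.0, 0.0)  # placeholder value for illustration

def sample_dynamic_data(interval_s, n_samples):
    """S3/S4: repeatedly acquire the position of the electric-field change
    region, yielding the dynamic change data of its coordinates."""
    samples = []
    for _ in range(n_samples):
        samples.append(read_field_position())
        time.sleep(interval_s)
    # Per-axis differences between consecutive samples (ΔX_n, ΔY_n, ΔZ_n)
    deltas = [tuple(b[i] - a[i] for i in range(3))
              for a, b in zip(samples, samples[1:])]
    return samples, deltas
```

In a real system the sampling interval would be the per-axis ΔT_x, ΔT_y or ΔT_z chosen in steps S52, S61 and S73.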
S5, adjust the audio volume in real time according to the dynamic change data of the X-axis coordinate;
As shown in Fig. 3, this step specifically comprises the following sub-steps:
S51, set the correspondence between the change in the X-axis coordinate and the change in volume;
In the embodiment of the invention, the correspondence is set as: for every 1 cm increase in the X-axis coordinate, the volume increases by 1 dB; for every 1 cm decrease, the volume decreases by 1 dB.
S52, set the acquisition time interval ΔT_x of the X-axis coordinate data;
In the embodiment of the invention, ΔT_x = 0.1 s.
S53, calculate the change in the X-axis coordinate within each acquisition interval ΔT_x according to formula (1):
ΔX_n = X_n − X_{n−1} (n = 1, 2, 3, …)   (1);
S54, adjust the audio volume in real time according to the correspondence set in step S51.
For example, if ΔX_1 = 5 cm, the audio volume increases by 5 dB; if ΔX_2 = −7 cm, the audio volume decreases by 7 dB.
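Under the embodiment's 1 cm ↔ 1 dB correspondence, sub-steps S51 to S54 reduce to a simple delta-to-gain mapping. The sketch below is an illustration under that assumption; the clamp to a 0 to 100 dB range is an added safety assumption, not part of the original.

```python
DB_PER_CM = 1.0  # S51: 1 cm of X-axis motion corresponds to 1 dB of volume

def adjust_volume(volume_db, x_prev, x_curr):
    """S53/S54: apply the volume change implied by ΔX_n = X_n − X_{n−1}.
    The [0, 100] dB clamp is an assumption added for safety."""
    delta_x = x_curr - x_prev          # formula (1)
    volume_db += DB_PER_CM * delta_x   # apply the S51 correspondence
    return max(0.0, min(100.0, volume_db))

# The embodiment's examples:
v = adjust_volume(50.0, 0.0, 5.0)   # ΔX_1 = 5 cm  → volume rises by 5 dB
v = adjust_volume(v, 5.0, -2.0)     # ΔX_2 = −7 cm → volume falls by 7 dB
```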
S6, control switching between audio tracks in real time according to the dynamic change data of the Y-axis coordinate;
As shown in Fig. 4, this step specifically comprises the following sub-steps:
S61, set the acquisition time interval ΔT_y of the Y-axis coordinate data;
In the embodiment of the invention, ΔT_y = 0.5 s.
S62, calculate the change in the Y-axis coordinate within each acquisition interval ΔT_y according to formula (2):
ΔY_n = Y_n − Y_{n−1} (n = 1, 2, 3, …)   (2);
S63, set the audio switching trigger thresholds Y_max and Y_min, where Y_max is positive and Y_min is negative;
In the embodiment of the invention, Y_max = 20 cm and Y_min = −20 cm.
S64, compare the change ΔY_n of the Y-axis coordinate with Y_max and Y_min respectively:
if ΔY_n ≥ Y_max, switch to the next track in the audio playlist;
if ΔY_n ≤ Y_min, switch to the previous track in the audio playlist;
if Y_min < ΔY_n < Y_max, continue playing the current track.
For example, if ΔY_1 = 18 cm, the current track continues playing; if ΔY_2 = 22 cm, playback switches to the next track in the playlist; if ΔY_3 = −25 cm, playback switches to the previous track.
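The threshold comparison of S62 to S64 can be sketched as follows, using the embodiment's values Y_max = 20 cm and Y_min = −20 cm; the string return values are illustrative labels, not part of the original.

```python
Y_MAX = 20.0   # cm, S63: trigger threshold for "next track"
Y_MIN = -20.0  # cm, S63: trigger threshold for "previous track"

def switch_decision(y_prev, y_curr):
    """S62-S64: decide the playlist action from ΔY_n = Y_n − Y_{n−1}."""
    delta_y = y_curr - y_prev   # formula (2)
    if delta_y >= Y_MAX:
        return "next"           # fast forward sweep: next track
    if delta_y <= Y_MIN:
        return "previous"       # fast backward sweep: previous track
    return "continue"           # small motion: keep playing current track

# The embodiment's examples: ΔY of 18 cm continues, 22 cm skips
# forward, −25 cm skips backward.
```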
S7, control audio play and pause in real time according to the dynamic change data of the Z-axis coordinate.
As shown in Fig. 5, this step specifically comprises the following sub-steps:
S71, define the click trigger threshold Z_m;
In the embodiment of the invention, Z_m = 10 cm.
S72, define the click trigger condition: when the Z-axis coordinate first decreases by more than the click trigger threshold Z_m and subsequently increases by more than Z_m, this is defined as triggering one click, recorded as click count N_z = 1;
S73, set the click-count decision time interval ΔT_z;
In the embodiment of the invention, ΔT_z = 1 s.
S74, control audio play and pause in real time according to the number of clicks N_z within the decision interval ΔT_z:
if N_z = 1, play the audio;
if N_z = 2, pause the audio;
if N_z ≠ 1 and N_z ≠ 2, i.e. when the click count is any value other than 1 or 2, keep the current audio state.
Keeping the current audio state means: if the audio is currently playing, it keeps playing; if it is currently paused, it stays paused.
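The click detection of S71 to S74 can be sketched as a scan over the Z-axis samples collected within one decision window ΔT_z. This is a simplified illustration: the sample-window handling and monotone-run detection are assumptions about how the condition in S72 might be evaluated, not the patent's stated implementation.

```python
Z_M = 10.0  # cm, S71: click trigger threshold Z_m

def count_clicks(z_samples):
    """S72: count clicks in one ΔT_z window of Z-axis samples.
    A click is a drop of more than Z_M followed by a rise of more than Z_M."""
    clicks = 0
    i = 0
    while i < len(z_samples) - 1:
        j = i                      # extend a monotone falling run
        while j + 1 < len(z_samples) and z_samples[j + 1] < z_samples[j]:
            j += 1
        dropped = z_samples[i] - z_samples[j] > Z_M
        k = j                      # extend the following rising run
        while k + 1 < len(z_samples) and z_samples[k + 1] > z_samples[k]:
            k += 1
        if dropped and z_samples[k] - z_samples[j] > Z_M:
            clicks += 1            # full down-then-up excursion past Z_M
        i = max(k, i + 1)
    return clicks

def play_pause_action(n_clicks, state):
    """S74: one click plays, two clicks pause, anything else keeps state."""
    if n_clicks == 1:
        return "playing"
    if n_clicks == 2:
        return "paused"
    return state
```

For example, the sample sequence 30 → 15 → 30 cm (a 15 cm dip and recovery) counts as one click and starts playback; two such dips within ΔT_z pause it.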
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principles of the invention, and that the scope of protection of the invention is not limited to these particular statements and embodiments. Based on the technical teachings disclosed herein, those of ordinary skill in the art can make various other concrete variations and combinations that do not depart from the essence of the invention, and such variations and combinations remain within the scope of protection of the invention.
Claims (6)
1. An audio control method based on 3D gesture recognition, characterized in that it comprises the following steps:
S1, acquire the electric field data in the gesture recognition region;
S2, establish a spatial 3D coordinate system in the gesture recognition region;
S3, acquire the position coordinates of the electric-field change region within the gesture recognition region;
S4, repeat step S3 to obtain the dynamic change data of the position coordinates of the electric-field change region;
S5, adjust the audio volume in real time according to the dynamic change data of the X-axis coordinate;
S6, control switching between audio tracks in real time according to the dynamic change data of the Y-axis coordinate;
S7, control audio play and pause in real time according to the dynamic change data of the Z-axis coordinate.
2. The audio control method based on 3D gesture recognition according to claim 1, characterized in that step S2 specifically comprises the following sub-steps:
S21, select a point in the gesture recognition region as the coordinate origin;
S22, determine the positive directions of the X-, Y- and Z-axes to establish the spatial 3D coordinate system.
3. The audio control method based on 3D gesture recognition according to claim 1, characterized in that step S5 specifically comprises the following sub-steps:
S51, set the correspondence between the change in the X-axis coordinate and the change in volume;
S52, set the acquisition time interval ΔT_x of the X-axis coordinate data;
S53, calculate the change in the X-axis coordinate within each acquisition interval ΔT_x according to formula (1):
ΔX_n = X_n − X_{n−1} (n = 1, 2, 3, …)   (1);
S54, adjust the audio volume in real time according to the correspondence set in step S51.
4. The audio control method based on 3D gesture recognition according to claim 1, characterized in that step S6 specifically comprises the following sub-steps:
S61, set the acquisition time interval ΔT_y of the Y-axis coordinate data;
S62, calculate the change in the Y-axis coordinate within each acquisition interval ΔT_y according to formula (2):
ΔY_n = Y_n − Y_{n−1} (n = 1, 2, 3, …)   (2);
S63, set the audio switching trigger thresholds Y_max and Y_min;
S64, compare the change ΔY_n of the Y-axis coordinate with Y_max and Y_min respectively:
if ΔY_n ≥ Y_max, switch to the next track in the audio playlist;
if ΔY_n ≤ Y_min, switch to the previous track in the audio playlist;
if Y_min < ΔY_n < Y_max, continue playing the current track.
5. The audio control method based on 3D gesture recognition according to claim 4, characterized in that the value of Y_max is positive and the value of Y_min is negative.
6. The audio control method based on 3D gesture recognition according to claim 1, characterized in that step S7 specifically comprises the following sub-steps:
S71, set the click trigger threshold Z_m;
S72, define the click trigger condition: when the Z-axis coordinate first decreases by more than the click trigger threshold Z_m and subsequently increases by more than Z_m, this is defined as triggering one click, recorded as click count N_z = 1;
S73, set the click-count decision time interval ΔT_z;
S74, control audio play and pause in real time according to the number of clicks N_z within the decision interval ΔT_z:
if N_z = 1, play the audio;
if N_z = 2, pause the audio;
if N_z ≠ 1 and N_z ≠ 2, keep the current audio state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510222339.0A CN104834377A (en) | 2015-05-05 | 2015-05-05 | Audio control method based on 3D (3-Dimensional) gesture recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104834377A true CN104834377A (en) | 2015-08-12 |
Family
ID=53812314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510222339.0A Pending CN104834377A (en) | 2015-05-05 | 2015-05-05 | Audio control method based on 3D (3-Dimensional) gesture recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104834377A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102314269A (en) * | 2010-07-02 | 2012-01-11 | 谊达光电科技股份有限公司 | Touch panel proximity detection device and method |
CN102870078A (en) * | 2010-02-10 | 2013-01-09 | 微晶片科技德国第二公司 | System and method for contactless detection and recognition of gestures in three-dimensional space |
CN103257714A (en) * | 2013-05-31 | 2013-08-21 | 深圳职业技术学院 | All-in-one machine supporting gesture recognition |
CN103440049A (en) * | 2013-08-28 | 2013-12-11 | 深圳超多维光电子有限公司 | Input device and input method |
CN104123095A (en) * | 2014-07-24 | 2014-10-29 | 广东欧珀移动通信有限公司 | Suspension touch method and device based on vector calculation |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912118A (en) * | 2016-04-12 | 2016-08-31 | 童雷 | Method and device for surround sound-image control based on natural user interface |
CN108920076A (en) * | 2018-06-27 | 2018-11-30 | 清远墨墨教育科技有限公司 | A kind of operation recognition methods of user gesture and identifying system |
CN108920076B (en) * | 2018-06-27 | 2021-03-02 | 清远墨墨教育科技有限公司 | User gesture operation recognition method and recognition system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| EXSB | Decision made by SIPO to initiate substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2015-08-12 |