CN109032384A - Music control method, device and storage medium and wearable device - Google Patents
- Publication number
- CN109032384A CN109032384A CN201811005539.0A CN201811005539A CN109032384A CN 109032384 A CN109032384 A CN 109032384A CN 201811005539 A CN201811005539 A CN 201811005539A CN 109032384 A CN109032384 A CN 109032384A
- Authority
- CN
- China
- Prior art keywords
- target
- music
- shaking
- setting range
- state information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/0346—Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/16—Sound input; Sound output
Abstract
Embodiments of the present application disclose a music control method, a device, a storage medium, and a wearable device. The method comprises: obtaining shaking state information of a body part of a user wearing the wearable device; matching, according to the shaking state information, target music to be played; and controlling playback of the target music. With this technical solution, target music to be played can be matched and played according to the shaking state information of the user's body part, such as head-shaking, arm-shaking, or foot-shaking information. This solves the prior-art problem that music control is cumbersome and insufficiently intelligent, optimizes music control operation, enriches the functions of the wearable device, and improves user engagement with the wearable device.
Description
Technical field
The present application relates to the technical field of intelligent wearable devices, and in particular to a music control method, a device, a storage medium, and a wearable device.
Background
With social progress and the development of science and technology, components of all kinds have become smaller and smaller, so that they can be integrated into wearable devices.
Wearable devices currently on the market can usually run an independent operating system, like a smartphone, and can run application programs. Wearable devices offer more and more functions and bring convenience to people's life and work: users can make and receive calls, measure physiological parameters, listen to music, watch videos, play games, and so on. However, when listening to music on a wearable device, a user needs to manually enter the category or the particular song he or she wants to hear, which is cumbersome and not intelligent. The music control function of wearable devices therefore needs to be optimized.
Summary of the invention
Embodiments of the present application provide a music control method, a device, a storage medium, and a wearable device, which can optimize the music control schemes of the related art.
In a first aspect, an embodiment of the present application provides a music control method, comprising:
obtaining shaking state information of a body part of a user wearing a wearable device;
matching, according to the shaking state information, target music to be played; and
controlling playback of the target music.
In a second aspect, an embodiment of the present application provides a music playback control device, comprising:
a shaking state information obtaining module, configured to obtain shaking state information of a body part of a user wearing a wearable device;
a target music determining module, configured to match, according to the shaking state information, target music to be played; and
a target music playing module, configured to control playback of the target music.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the music control method provided in the first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides a wearable device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the music control method provided in the first aspect is implemented.
The embodiments of the present application provide a music control scheme that comprises obtaining shaking state information of a body part of a user wearing a wearable device, matching target music to be played according to the shaking state information, and controlling playback of the target music. With this technical solution, target music to be played can be matched and played according to the shaking state information of the user's body part, such as head-shaking, arm-shaking, or foot-shaking information. This solves the prior-art problem that music control is cumbersome and insufficiently intelligent, optimizes music control operation, enriches the functions of the wearable device, and improves user engagement with the wearable device.
Detailed description of the invention
Fig. 1 is a flowchart of a music control method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another music control method provided by an embodiment of the present application;
Fig. 3 is a schematic perspective view of smart glasses provided by an embodiment of the present application;
Fig. 4 is a flowchart of still another music control method provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a music playback control device provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a wearable device provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another wearable device provided by an embodiment of the present application;
Fig. 8 is a schematic perspective view of a wearable device provided by an embodiment of the present application.
Specific embodiment
To make the purposes, technical solutions, and advantages of the present application clearer, specific embodiments of the present application are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire content. Before the exemplary embodiments are discussed in greater detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be implemented in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing can be terminated when its operations are completed, but there may also be additional steps not included in the drawings. The processing can correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
Fig. 1 is a flowchart of a music control method provided by an embodiment of the present application. The method of this embodiment can be executed by a music playback control device, which can be implemented in hardware and/or software and can generally be integrated inside a wearable device as part of the wearable device.
As shown in Fig. 1, the music control method provided in this embodiment comprises the following steps:
Step 101: obtain shaking state information of a body part of a user wearing a wearable device.
The wearable device described in this embodiment includes, but is not limited to, smart glasses, a smart helmet, smart gloves, a smart bracelet, a smart watch, a smart ring, smart clothing, smart shoes, and the like.
Taking smart glasses as an example, the structure of the wearable device is briefly introduced. The smart glasses include a frame body and lenses, the frame body including temples and a rim. Optionally, a breathing light can be provided on the inside of a temple; the breathing light can be an LED light and can flash at the frequency of the wearer's head movement. A touch area (touch panel) and a bone conduction area are also provided on the temple. The touch area is located on the outside of the temple and contains a touch detection module for detecting the user's touch operations. For example, a touch sensor module detects the user's touch operations: it outputs a low level in the initial state and a high level when a touch operation occurs. In the scenario where the user wears the smart glasses, the side of the temple close to the face is defined as the inside, and the opposite side away from the face is defined as the outside. The bone conduction area is arranged at the position of the temple close to the ear, and a bone conduction speaker or bone conduction sensor is provided in the bone conduction area. A heart rate sensor is arranged at the position of the temple close to the temporal region of the face, for obtaining the heart rate information of the user wearing the smart glasses. A smart microphone is provided on the rim; it can intelligently recognize the current environmental noise level and automatically adjust the performance of the microphone based on the environmental noise. An acceleration sensor, a gyroscope, and the like are also provided on the rim. In addition, an electrooculogram (EOG) sensor is provided on the rim and nose pads for collecting the eye state of the user. A microprocessing area is also provided in the temple; a processor arranged in the microprocessing area is electrically connected to the above touch detection module, bone conduction earphone, heart rate sensor, smart microphone, acceleration sensor, gyroscope, EOG sensor, and other devices, for receiving pending data, performing data operations and processing, and outputting control instructions to the corresponding devices. It should be noted that the smart glasses can download multimedia resources from the cloud via the Internet for playback, or can establish a communication connection with a terminal device and obtain multimedia resources from the terminal device; the present application does not limit this.
In this embodiment, the shaking state information can be obtained when it is detected that the currently running application is a music playing process, or when the user's gesture or voice for starting a music playing process is detected. Of course, step 101 can also be executed under other relevant circumstances, which this embodiment does not limit.
The user's body part described in this embodiment may include the head, a hand (including the arm, palm, or fingers), a foot (including the sole or toes), the waist, the buttocks, the chest, and the like, and may also be a finer part such as an eye, the lips, or an ear; correspondingly, the shaking state information of an eye is, for example, eye-blink information.
The application scenarios applicable to the embodiments of the present application include, but are not limited to, the following. In one scenario, the user sits on a chair or stands in a non-moving state, and only the body part wearing the wearable device shakes: for example, the head wearing smart glasses shakes, the wrist wearing a smart bracelet shakes, or the foot wearing smart shoes shakes. It should be understood that, when smart glasses are worn, shaking of the foot can also drive the head to shake correspondingly, so that the smart glasses can detect the shaking state information of the head shaking caused by the foot shaking. In another scenario, the user is in a motion state such as running or rope skipping, and the wearable device can detect the movement state information of the user's entire body or of some body part. In addition, the embodiments of the present application are applicable not only to detecting the direct shaking state information of a specific body part of the user, but also to obtaining the shaking state information generated by rubbing or tapping a body part so that it vibrates; for example, the user taps or rubs his head with a finger, causing the head to shake subtly.
The shaking state information described in this embodiment may include the shaking rhythm, the shaking direction, and the shaking amplitude. The shaking direction may include horizontal shaking, vertical shaking, and front-back shaking. Of course, the shaking direction of the user's body part may form a certain angle with the horizontal, vertical, and front-back directions. In that case, a shaking direction whose angle to the vertical direction is less than 45° can be attributed to the vertical direction, a shaking direction whose angle to the horizontal direction is less than 45° can be attributed to the horizontal direction, and a shaking direction whose angle to the front-back direction is less than 45° can be attributed to the front-back direction; a shaking direction whose angle to the vertical direction is less than 45° and whose angle to the front-back direction is also less than 45° can be regarded as vertical plus front-back. Based on similar direction-determination rules, the actual shaking directions corresponding to the user's various shaking angles can be determined.
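As a sketch of the 45-degree attribution rule above, the following hypothetical function assigns a shake vector to one canonical direction. Comparing absolute components is used here as a simple proxy for the angle test, and the axis naming (x = horizontal, y = vertical, z = front-back) is an assumption for illustration:

```python
def classify_shake_direction(vx: float, vy: float, vz: float) -> str:
    """Attribute a shake vector to one of the three canonical directions.

    The dominant absolute component stands in for the within-45-degrees
    angle test described in the text; ties prefer vertical, then
    horizontal, as an arbitrary illustrative choice.
    """
    ax, ay, az = abs(vx), abs(vy), abs(vz)
    if ay >= ax and ay >= az:
        return "vertical"
    if ax >= az:
        return "horizontal"
    return "front-back"
```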
Obtaining the shaking state information of the body part of the user wearing the wearable device may include: obtaining the shaking state information through a motion sensor of the wearable device; or obtaining the shaking state information through a bone conduction microphone of the wearable device.
The motion sensor includes an acceleration sensor and a gyroscope, and the acceleration sensor may include a horizontal acceleration sensor and a gravity acceleration sensor. The user's shaking operation can be decomposed into individual shaking actions, each of which is equivalent to a beat in music. The shaking rhythm of each beat can be detected by the acceleration sensor and the gyroscope, the shaking direction of each beat can be detected by the gyroscope, and the shaking amplitude of each beat can be detected by the acceleration sensor and the gyroscope. Illustratively, with "X" as one beat, the obtained rhythm information is "X-X_X X X-", where "-" prolongs the previous beat and "_" is a rest, i.e. that beat is silent.
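The beat notation can be made concrete with a small parser. The exact semantics of "-", "_", and spaces are only sketched in the text, so the rules encoded below are assumptions:

```python
def parse_rhythm(pattern: str):
    """Parse the beat notation into (kind, duration) events.

    Assumed semantics: 'X' starts a one-unit beat, '-' prolongs the
    previous event by one unit, '_' is a one-unit rest, and spaces
    merely separate beats.
    """
    events = []
    for ch in pattern:
        if ch == "X":
            events.append(["beat", 1])
        elif ch == "-" and events:
            events[-1][1] += 1  # lengthen whatever came just before
        elif ch == "_":
            events.append(["rest", 1])
    return [tuple(e) for e in events]
```

For the example rhythm "X-X_X X X-" this yields a two-unit beat, a one-unit beat, a rest, two single beats, and a final two-unit beat.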
When the user taps or rubs his head, vibration pulses can travel through his skull. The bone conduction microphone in the smart glasses can pick up the vibration pulses passing through the skull and convert them into an analog and/or digital signal; the analog signal can be digitized by an analog-to-digital converter. After receiving the digital signal, the processor can analyze it to generate the shaking state information.
Step 102: match, according to the shaking state information, target music to be played.
In this embodiment, the shaking state information of the body part of the user wearing the wearable device can be obtained within a set time (for example, a sampling period), and matched against the music in a preset song library. If the currently obtained shaking state information is not sufficient to match suitable music, shaking state information of a longer duration can be further obtained until the target music is matched.
After the shaking state information is matched against the music in the preset song library, the several pieces of music with the highest matching degree can be obtained. The music with the highest matching degree can be taken as the target music, or the candidates can be displayed to the user for selection.
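The top-N library-matching step might look like the following sketch. The patent does not specify a matching algorithm; reducing the shake rhythm to an estimated tempo, ranking songs by tempo distance, and the per-song `bpm` field are all illustrative assumptions:

```python
def match_music(shake_bpm: float, song_library: list, top_n: int = 3) -> list:
    """Return the top-N candidate songs closest in tempo to the shake rhythm.

    A real matcher would compare whole rhythm patterns; this sketch ranks
    songs by distance between a hypothetical stored tempo and the tempo
    estimated from the motion sensor.
    """
    ranked = sorted(song_library, key=lambda song: abs(song["bpm"] - shake_bpm))
    return ranked[:top_n]


# Hypothetical local library and a shake tempo of 121 beats per minute.
library = [
    {"title": "Song A", "bpm": 120},
    {"title": "Song B", "bpm": 90},
    {"title": "Song C", "bpm": 124},
]
candidates = match_music(121, library, top_n=2)
```

The first candidate would be played directly, or the short list shown to the user for selection, as described above.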
The matching manner in this embodiment may include: matching against a locally stored song library, or matching against an online song library through the network communication module of the wearable device.
The shaking information may include the shaking rhythm, and this step may include: determining, according to the shaking rhythm, rhythm information matching the shaking rhythm and the target music to be played. The shaking rhythm may include the length of each beat and the pauses between beats. The shaking frequency of the user's body part can be obtained through the acceleration sensor and the gyroscope sensor of the wearable device, and the shaking rhythm can then be obtained.
In this embodiment, the rhythm information of each song can be preset and stored. The rhythm information can be carried in the music file, or can have been generated previously by the wearable device according to the corresponding shaking operations the user made when listening to that music.
Step 103: control playback of the target music.
With the music control method provided in this embodiment, corresponding target music to be played can be matched and played according to the shaking state information of the user's body part, such as head-shaking, arm-shaking, or foot-shaking information. This solves the prior-art problem that music control is cumbersome and insufficiently intelligent, optimizes music control operation, enriches the functions of the wearable device, and improves user engagement with the wearable device.
In some embodiments, before obtaining the shaking state information of the body part of the user wearing the wearable device and determining, according to the shaking rhythm, the matching rhythm information and the target music to be played, the method further includes a step of determining the rhythm information of music. That is, the music control method provided in this embodiment may further include the following steps: if music is currently playing, obtaining the current shaking rhythm of the body part of the user wearing the wearable device; and generating the rhythm information of the currently playing music according to the current shaking rhythm. Illustratively, rhythm information of "X-X_X X X-" is generated.
The benefit of this step is as follows. Because everyone's habits and personality differ, the shaking state information exhibited even while listening to the same music varies from person to person. By using the shaking rhythm the user himself exhibits when listening to music, this embodiment can generate rhythm information of the music that is specific to that user, so that music matching based on the user's shaking habits finds music that better meets the user's needs and improves the accuracy of music matching. Further, historical data of the user's shaking rhythm information when listening to music and the rhythm information of the music played at the time can be used as training samples; the training samples are trained based on a machine learning method to generate a music rhythm information determination model. After the obtained shaking rhythm of the user is input into the music rhythm determination model, the music rhythm information corresponding to that shaking rhythm can be obtained. Subsequently, the music rhythm information determination model can be revised according to the user's adjustment instructions for the music rhythm information, improving the accuracy of the model.
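The rhythm-model idea can be illustrated with a toy stand-in. The text proposes a trained machine-learning model; the nearest-neighbour lookup below only demonstrates the assumed input/output contract (shake tempo in, music tempo out) and is not the patent's actual model:

```python
class UserRhythmModel:
    """Toy stand-in for the music rhythm information determination model.

    It memorises (shake tempo, music tempo) pairs observed while the user
    listens, and answers new queries by nearest neighbour over the shake
    tempo. All tempo units here are hypothetical beats per minute.
    """

    def __init__(self):
        self.samples = []  # (shake_bpm, music_bpm) pairs seen so far

    def observe(self, shake_bpm: float, music_bpm: float) -> None:
        """Record one training pair collected during playback."""
        self.samples.append((shake_bpm, music_bpm))

    def predict(self, shake_bpm: float) -> float:
        """Map a new shake tempo to the nearest remembered music tempo."""
        if not self.samples:
            return shake_bpm  # no history yet: assume a 1:1 mapping
        nearest = min(self.samples, key=lambda pair: abs(pair[0] - shake_bpm))
        return nearest[1]
```

User corrections to the predicted rhythm would be folded back in by calling `observe` again, mirroring the revision step described above.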
Fig. 2 is a flowchart of another music control method provided by an embodiment of the present application. While obtaining the shaking state information of the user's body part, the mouth shape information of the user can also be obtained, and the target music to be played can be matched according to both the shaking state information and the mouth shape information. As shown in Fig. 2, the method provided in this embodiment comprises the following steps:
Step 201: obtain the shaking state information of the body part of the user wearing the wearable device and the mouth shape information of the user.
It should be noted that "while" in this embodiment does not mean absolute simultaneity; it means that, before the operation of matching the target music is executed, not only the shaking state information of the user's body part but also the mouth shape information of the user is obtained. For example, the head-shaking information of the user and the mouth shape information of the user are both obtained; the two obtaining operations can be executed successively or in parallel, which this embodiment does not limit.
For the mouth shape information, the face or mouth of the person can be photographed by a camera in the wearable device, and the mouth shape information can then be processed by the processor of the wearable device to obtain rhythm information and/or voice information; alternatively, the captured face or mouth images can be sent to a server, which processes the mouth shape information to obtain the rhythm information and/or voice information and returns them to the wearable device. The smart glasses can also detect, through the bone conduction microphone, the skull vibration signal caused by the mouth shape, convert the vibration signal into a digital signal, and further process the digital signal to obtain the rhythm information and/or voice information corresponding to the mouth shape information; alternatively, the digital signal can be sent to a server, which processes it to obtain the corresponding rhythm information and/or voice information and returns them to the smart glasses.
Taking smart glasses as an example, the manner of photographing the mouth shape of the user wearing the smart glasses with a camera is described. The camera can be a miniature camera arranged at the upper frame of the rim. Fig. 3 is a schematic perspective view of smart glasses provided by an embodiment of the present application. As shown in Fig. 3, the upper frame of the smart glasses rim is provided with a boss 310, which extends a set length d from the plane A where the rim is located in the direction away from the rim. A front camera 6071 is provided on a first plane 311 of the boss 310 parallel to the plane A where the rim is located. The distance between a second plane of the boss 310 parallel to the plane A (which can be regarded as the plane in contact with the rim) and the rim is smaller than the distance between the first plane and the rim. The boss 310 includes a rotating part 312, which extends from one end of the boss 310 beyond the boss body; the axis of the rotating part 312 coincides with the central axis of the boss body, and the rotating part 312 can rotate around that central axis. A rear camera 6072 is provided on the rotating part 312. By controlling the rotation of the rear camera 6072, target objects at different angles can be photographed. For example, when an eye image needs to be captured, the plane where the rear camera is located is rotated until it is parallel to the first plane. When a mouth shape image needs to be captured, the plane where the rear camera is located is controlled to rotate along the arrow direction until it forms an angle of 45° to 60° with the first plane, so that the mouth shape image can be captured.
Illustratively, the rear camera captures mouth shape images of the user wearing the smart glasses according to a set period and sends the mouth shape images to the processor. The processor recognizes each mouth shape image and determines the mouth shape information; for example, the processor counts the mouth shape change frequency within a preset time period, or records the amplitude value of each mouth shape within the preset time period.
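The mouth shape change frequency mentioned above could be computed from the per-frame recognition results roughly as follows; the frame labels and the definition of a "change" (any frame whose label differs from the previous frame's) are assumptions:

```python
def mouth_shape_change_frequency(shapes, period_seconds: float) -> float:
    """Count mouth-shape changes per second over a window of frames.

    `shapes` is the per-frame mouth-shape label sequence produced by the
    image recognition step; the result is changes per second within the
    preset time period.
    """
    changes = sum(1 for prev, cur in zip(shapes, shapes[1:]) if cur != prev)
    return changes / period_seconds
```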
Step 202: match the target music to be played according to the shaking state information and the mouth shape information.
The matching manner described in this embodiment includes local song library matching and online song library matching.
Illustratively, the target music is matched from the local library or the online song library according to the shaking rhythm and the mouth shape change frequency. If the shaking rhythm is "X-X_X X X-" and the mouth shape change frequency matches that shaking rhythm, the target music is matched based on the shaking rhythm; for example, the target music matching the shaking rhythm can be selected from the local library or the online library.
Step 203: control playback of the target music.
With the music control method provided in this embodiment, by obtaining the mouth shape information of the user while obtaining the shaking state information, the target music is matched jointly from the two angles of shaking state information and mouth shape information, which can improve the accuracy and speed of target music matching.
Fig. 4 is a flowchart of still another music control method provided by an embodiment of the present application. As shown in Fig. 4, the music control method provided in this embodiment comprises the following steps:
Step 301: obtain the shaking rhythm of the body part of the user wearing the wearable device within a sampling period, and the shaking direction and shaking amplitude of each beat in the shaking rhythm.
The sampling period can be a preset time, such as 3 seconds or 5 seconds.
The shaking rhythm is composed of multiple beats of shaking actions. The motion sensor of the wearable device is used to collect the shaking rhythm composed of the multiple beats of shaking actions, and the shaking direction and shaking amplitude corresponding to each beat. The shaking direction includes horizontal shaking, vertical shaking, and front-back shaking; the shaking amplitude can be the amplitude of the beat in its shaking direction.
Step 302: match the target music to be played according to the shaking rhythm.
The shaking rhythm is matched against the music in the local song library or the online song library, and the music whose rhythm information has the highest matching degree with the shaking rhythm is taken as the target music.
Step 303: determine a target playing type according to the shaking direction and shaking amplitude of each beat within the sampling period, the playing types including classical, rock, folk, and pop.
In this embodiment the playing types are divided into classical, rock, folk, and pop; it should be understood that one or more playing types can be added or removed according to the user's preferences and personal habits.
When listening to music of different playing types, the shaking state information exhibited by the user differs. This embodiment determines the target playing type from the collected shaking direction and shaking amplitude of the user, capturing the details of the user experience and fitting the user's deeper needs.
Under normal circumstances, for rock-type music, the up-down (vertical) shaking of the user's head is stronger; for classical-type music, the user is mostly in a state of appreciation, and the left-right (horizontal) shaking of the user's head is stronger; while for pop-type and folk-type music, the dispersion among the amounts of vertical, horizontal, and front-back shaking is generally smaller.
In some embodiments, this step may include: separately counting the numbers of beats within the sampling period whose shaking direction is horizontal, vertical, and front-back. If the number of vertical beats is greater than the number of front-back beats and greater than the number of horizontal beats, the target playing type is determined to be rock. If the variance of the numbers of horizontal, vertical, and front-back beats is within a first preset range, the target playing type is determined to be pop. If the variance of the numbers of horizontal, vertical, and front-back beats is within a second preset range, the target playing type is determined to be folk, the second preset range being smaller than the first preset range. If the number of horizontal beats is greater than the number of front-back beats and greater than the number of vertical beats, the target playing type is determined to be classical.
Alternatively, the average of the shaking amplitudes of the beats within the sampling period is calculated. If the average is within a first setting range, the target playing type is determined to be rock; if the average is within a second setting range, the target playing type is determined to be pop; if the average is within a third setting range, the target playing type is determined to be folk; and if the average is within a fourth setting range, the target playing type is determined to be classical; wherein the first setting range is greater than the second setting range, the second setting range is greater than the third setting range, and the third setting range is greater than the fourth setting range.
Alternatively, the two criteria are combined. If the number of vertical beats is greater than the number of front-back beats and greater than the number of horizontal beats, and the average shaking amplitude of the beats is within the first setting range, the target playing type is determined to be rock. If the variance of the numbers of horizontal, vertical, and front-back beats is within the first preset range, and the average shaking amplitude of the beats is within the second setting range, the target playing type is determined to be pop. If the variance of the numbers of horizontal, vertical, and front-back beats is within the second preset range, and the average shaking amplitude of the beats is within the third setting range, the target playing type is determined to be folk. If the number of horizontal beats is greater than the number of front-back beats and greater than the number of vertical beats, and the average shaking amplitude of the beats is within the fourth setting range, the target playing type is determined to be classical. Here the first setting range is greater than the second setting range, the second setting range is greater than the third setting range, and the third setting range is greater than the fourth setting range.
Under normal circumstances, for rock music, the shaking direction and shaking amplitude of the user's head corresponding to each beat vary, and the frequency of variation is high; for pop music the frequency of variation is second to that of rock; for folk music the frequency of variation is lower than that of pop; and for classical music the frequency of variation is the lowest.
In some embodiments, this step may include: obtaining a first change frequency in shaking direction and a second change frequency in shaking amplitude between the beats within the sampling period; if the first change frequency and the second change frequency are within a fifth setting range, determining that the target playing type is rock; if the first change frequency and the second change frequency are within a sixth setting range, determining that the target playing type is pop; if the first change frequency and the second change frequency are within a seventh setting range, determining that the target playing type is folk; and if the first change frequency and the second change frequency are within an eighth setting range, determining that the target playing type is classical; where the fifth setting range is greater than the sixth setting range, the sixth setting range is greater than the seventh setting range, and the seventh setting range is greater than the eighth setting range. Because the shaking motion of the user's head is not a motion repeated in the same direction on every beat, jointly considering the change rate of the shaking direction and the change rate of the shaking amplitude interprets the shaking motion more fully, so matching the target playing type based on the shaking motion in this way improves matching accuracy.
Here, "the first change frequency and the second change frequency are within the fifth setting range" may mean that the first change frequency and the second change frequency each fall within the fifth setting range, or that a weighted sum of the first change frequency and the second change frequency falls within the fifth setting range. The cases where the first and second change frequencies are within the sixth, seventh, or eighth setting range are interpreted in the same way.
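The two readings of "both frequencies within the fifth setting range" described above can be contrasted in a short sketch. The range boundaries and the 0.6/0.4 weights are illustrative assumptions, since the patent specifies neither.

```python
def both_in_range(f_dir, f_amp, rng):
    """Reading 1: each change frequency falls in the range individually."""
    lo, hi = rng
    return lo <= f_dir < hi and lo <= f_amp < hi

def weighted_in_range(f_dir, f_amp, rng, w_dir=0.6, w_amp=0.4):
    """Reading 2: a weighted sum of the two frequencies falls in the range."""
    lo, hi = rng
    return lo <= w_dir * f_dir + w_amp * f_amp < hi

FIFTH = (3.0, 5.0)  # hypothetical "fifth setting range" (rock)

# A pair of frequencies may satisfy one reading but not the other:
print(both_in_range(3.5, 2.5, FIFTH))      # False: 2.5 is below the range
print(weighted_in_range(3.5, 2.5, FIFTH))  # True: 0.6*3.5 + 0.4*2.5 = 3.1
```

The weighted-sum reading tolerates one low signal when the other is strong, which may be why the patent offers it as an alternative.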
Alternatively, in some embodiments, this step may include: obtaining the first change frequency in shaking direction between the beats within the sampling period. If the first change frequency is within the fifth setting range, the target playing type is determined to be rock; if it is within the sixth setting range, pop; if it is within the seventh setting range, folk; and if it is within the eighth setting range, classical. The relationships among the fifth, sixth, seventh, and eighth setting ranges are as described in the above example and are not repeated here. This design reduces the amount of data the processor has to process and increases the speed of matching the target playing type.
Alternatively, in some embodiments, this step may include: obtaining the second change frequency in shaking amplitude between the beats within the sampling period. If the second change frequency is within the fifth setting range, the target playing type is determined to be rock; if it is within the sixth setting range, pop; if it is within the seventh setting range, folk; and if it is within the eighth setting range, classical. The relationships among the fifth, sixth, seventh, and eighth setting ranges are as described in the above example and are not repeated here. This design likewise reduces the amount of data the processor has to process and increases the speed of matching the target playing type.
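Both simplified variants above reduce to the same lookup: map a single change frequency (in direction or in amplitude) onto the descending fifth-to-eighth setting ranges. The boundary values below are hypothetical, since the patent does not give them.

```python
# Descending, adjacent ranges; boundaries are illustrative assumptions.
RANGES = [
    ("rock", 6.0, 9.0),       # fifth setting range (largest)
    ("pop", 4.0, 6.0),        # sixth setting range
    ("folk", 2.0, 4.0),       # seventh setting range
    ("classical", 0.0, 2.0),  # eighth setting range (smallest)
]

def playing_type(change_frequency):
    """Classify from a single change frequency (direction OR amplitude)."""
    for genre, lo, hi in RANGES:
        if lo <= change_frequency < hi:
            return genre
    return None  # frequency outside every configured range

print(playing_type(7.0))  # rock
print(playing_type(1.0))  # classical
```

Because only one signal is sampled and compared, the per-beat work is a single scan of four intervals, which is the data-volume reduction the text refers to.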
In step 304, the target music is played according to the target playing type.
In the music control method provided in this embodiment, the user's shaking rhythm, shaking direction, and shaking frequency are obtained; the target music to be played is matched according to the shaking rhythm, and the target playing type of the target music is determined according to the shaking direction and shaking frequency, so that music playback control better fits the user's individual preferences and music control becomes more engaging.
Fig. 5 is a structural schematic diagram of a music playback control apparatus provided by an embodiment of the present application. The apparatus may be implemented in software and/or hardware and integrated in a wearable device. As shown in Fig. 5, the apparatus includes a shaking state information obtaining module 41, a target music determining module 42, and a target music playing module 43.
The shaking state information obtaining module 41 is configured to obtain shaking state information of a body part of a user wearing the wearable device;
The target music determining module 42 is configured to match target music to be played according to the shaking state information;
The target music playing module 43 is configured to control playback of the target music.
The apparatus provided in this embodiment can match and play the corresponding target music to be played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information, or foot tapping information. This solves the problem in the prior art that music control is cumbersome and insufficiently intelligent, optimizes the music control operation, enriches the functions of the wearable device, and improves user stickiness of the wearable device.
Optionally, the shaking state information includes a shaking rhythm, and the target music determining module is configured to:
Determine, according to the shaking rhythm, target music to be played whose rhythm information matches the shaking rhythm.
Optionally, the apparatus further includes a music rhythm information determining module, specifically configured to:
If music is currently playing, obtain the current shaking rhythm of the body part of the user wearing the wearable device;
Generate rhythm information of the currently playing music according to the current shaking rhythm.
Optionally, the shaking state information obtaining module is specifically configured to:
Obtain the shaking state information through a motion sensor of the wearable device; or,
Obtain the shaking state information through a bone-conduction microphone of the wearable device.
Optionally, the shaking state information obtaining module is further configured to obtain the user's mouth shape while obtaining the shaking state information of the user's body part, and the target music determining module is specifically configured to:
Match target music to be played according to the shaking state information and the mouth shape.
Optionally, the shaking state information includes a shaking direction and a shaking amplitude, the shaking direction including horizontal shaking, vertical shaking, and front-rear shaking, and the apparatus further includes:
A shaking direction and amplitude obtaining module, configured to obtain the shaking direction and shaking amplitude of the shaking motion of each beat within the sampling period;
A target playing type determining module, configured to determine a target playing type according to the shaking direction and shaking amplitude of each beat within the sampling period, the playing type including classical, rock, folk, and pop;
A second target music playing module, configured to play the target music according to the target playing type.
Optionally, the target playing type determining module is specifically configured to:
Separately count, within the sampling period, the number of beats whose shaking direction is horizontal shaking, vertical shaking, or front-rear shaking;
If the beat count of vertical shaking is greater than the beat count of front-rear shaking and greater than the beat count of horizontal shaking, and/or the per-beat average shaking amplitude is within a first setting range, determine that the target playing type is rock;
If the variance of the beat counts of horizontal shaking, vertical shaking, and front-rear shaking is within a first preset range, and/or the per-beat average shaking amplitude is within a second setting range, determine that the target playing type is pop;
If the variance of the beat counts of horizontal shaking, vertical shaking, and front-rear shaking is within a second preset range, and/or the per-beat average shaking amplitude is within a third setting range, determine that the target playing type is folk, the second preset range being smaller than the first preset range;
If the beat count of horizontal shaking is greater than the beat count of front-rear shaking and greater than the beat count of vertical shaking, and/or the per-beat average shaking amplitude is within a fourth setting range, determine that the target playing type is classical;
Wherein the first setting range is greater than the second setting range, the second setting range is greater than the third setting range, and the third setting range is greater than the fourth setting range.
Optionally, the target playing type determining module is specifically configured to:
Obtain a first change frequency in shaking direction and a second change frequency in shaking amplitude between the beats within the sampling period;
If the first change frequency and/or the second change frequency is within a fifth setting range, determine that the target playing type is rock;
If the first change frequency and/or the second change frequency is within a sixth setting range, determine that the target playing type is pop;
If the first change frequency and/or the second change frequency is within a seventh setting range, determine that the target playing type is folk;
If the first change frequency and/or the second change frequency is within an eighth setting range, determine that the target playing type is classical;
Wherein the fifth setting range is greater than the sixth setting range, the sixth setting range is greater than the seventh setting range, and the seventh setting range is greater than the eighth setting range.
An embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a music control method. The method includes: obtaining shaking state information of a body part of a user wearing a wearable device; matching target music to be played according to the shaking state information; and controlling playback of the target music.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory, such as flash memory or magnetic media (for example, a hard disk or optical storage); registers or other similar types of memory elements, and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system that is connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by this embodiment of the present application, the computer-executable instructions are not limited to the music control operations described above, and may also perform related operations in the music control method provided by any embodiment of the present application.
An embodiment of the present application provides a wearable device in which the music playback control apparatus provided by the embodiments of the present application can be integrated. Fig. 6 is a structural schematic diagram of a wearable device provided by an embodiment of the present application. The wearable device 500 may include: a memory 510 for storing executable program code; and a processor 520 which, by reading the executable program code stored in the memory 510, runs a program corresponding to the executable program code in order to: obtain shaking state information of a body part of a user wearing the wearable device; match target music to be played according to the shaking state information; and control playback of the target music.
The memory and processor listed in the above example are some of the components of the wearable device; the wearable device may also include other components. Fig. 7 is a structural block diagram of another wearable device provided by an embodiment of the present application, and Fig. 8 is a schematic perspective view of a wearable device provided by an embodiment of the present application. As shown in Fig. 7 and Fig. 8, the wearable device may include: a memory 601, a processor (Central Processing Unit, CPU) 602 (hereinafter referred to as the CPU), a display unit 603, a touch panel 604, a heart rate detection module 605, a distance sensor 606, a camera 607 (including a front camera 6071 and a rear camera 6072), a bone-conduction speaker 608, a microphone 609 (which may be, for example, a bone-conduction microphone), a breathing light 610, and a motion sensor 612. These components communicate through one or more communication buses or signal lines 611 (hereinafter also referred to as internal transmission lines).
It should be understood that the illustrated wearable device is only an example; a wearable device may have more or fewer components than shown in the figures, may combine two or more components, or may have a different component configuration. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software including one or more signal-processing and/or application-specific integrated circuits.
The wearable device provided in this embodiment, integrated with the music playback control apparatus, is described in detail below, taking smart glasses as an example of the wearable device.
The memory 601 can be accessed by the CPU 602. The memory 601 may include high-speed random access memory and may also include non-volatile memory, for example one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage components.
The display unit 603 can be used to display image data and the operation interface of the operating system. The display unit 603 is embedded in the frame of the smart glasses; internal transmission lines 611 are provided inside the frame and are connected to the display unit 603.
The touch panel 604 is arranged on the outer side of at least one temple of the smart glasses and is used to obtain touch data; the touch panel 604 is connected to the CPU 602 through the internal transmission lines 611. The touch panel 604 can detect the user's finger-slide and tap operations and transmit the detected data to the processor 602 for processing to generate corresponding control instructions, which may illustratively be a move-left instruction, a move-right instruction, a move-up instruction, a move-down instruction, and so on. Illustratively, the display unit 603 can display the virtual image data transmitted by the processor 602, and the virtual image can change accordingly with the user operations detected by the touch panel 604. Specifically, this can be screen switching: after a move-left or move-right instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 603 displays video playback information, the move-left instruction can rewind the played content and the move-right instruction can fast-forward it. When the display unit 603 displays editable text content, the move-left, move-right, move-up, and move-down instructions can be displacement operations on the cursor, that is, the position of the cursor moves in accordance with the user's touch operations on the touch panel. When the content displayed by the display unit 603 is a game animation picture, the move-left, move-right, move-up, and move-down instructions can control the object in the game; for example, in an aircraft game, these instructions can respectively control the flight direction of the aircraft. When the display unit 603 can display video pictures of different channels, the move-left, move-right, move-up, and move-down instructions can switch among the different channels, where the move-up and move-down instructions can switch to preset channels (such as the channels the user commonly uses). When the display unit 603 displays a static picture, the move-left, move-right, move-up, and move-down instructions can switch among different pictures, where the move-left instruction can switch to the previous picture, the move-right instruction can switch to the next picture, the move-up instruction can switch to the previous atlas, and the move-down instruction can switch to the next atlas. The touch panel 604 can also be used to control the display switch of the display unit 603. Illustratively, when the touch area of the touch panel 604 is long-pressed, the display unit 603 is powered on and displays a graphic interface; when the touch area is long-pressed again, the display unit 603 is powered off. After the display unit 603 is powered on, slide-up and slide-down operations can be performed on the touch panel 604 to adjust the brightness or resolution of the image displayed on the display unit 603.
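The context-dependent mapping described above — the same swipe instruction triggering different actions depending on what the display unit is showing — can be sketched as a dispatch table. The context names, action strings, and `handle_swipe` helper are hypothetical illustrations, not part of the patent.

```python
# Hypothetical dispatch table: the action a swipe instruction triggers depends
# on what the display unit is currently showing, per the description above.
ACTIONS = {
    "virtual_image": {"left": "previous picture", "right": "next picture"},
    "video": {"left": "rewind", "right": "fast-forward"},
    "text_editor": {"left": "cursor left", "right": "cursor right",
                    "up": "cursor up", "down": "cursor down"},
    "channels": {"up": "preset channel", "down": "preset channel",
                 "left": "previous channel", "right": "next channel"},
}

def handle_swipe(display_context, instruction):
    """Resolve a move-left/right/up/down instruction in the current context."""
    return ACTIONS.get(display_context, {}).get(instruction)

print(handle_swipe("video", "left"))   # rewind
print(handle_swipe("video", "right"))  # fast-forward
```

A table keeps the instruction-to-action mapping in one place, so adding a new display context (for example, the game picture) only extends the table rather than the control flow.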
The heart rate detection module 605 is used to measure the user's heart rate data; heart rate refers to the number of heartbeats per minute, and the heart rate detection module 605 is arranged on the inner side of a temple. Specifically, the heart rate detection module 605 may obtain human electrocardiographic data using dry electrodes by means of electric pulse measurement and determine the heart rate from the amplitude peaks in the electrocardiographic data; the heart rate detection module 605 may also be formed by a light transmitter and a light receiver that measure the heart rate by a photoelectric method, in which case the heart rate detection module 605 is arranged at the bottom of a temple, at the earlobe of the human auricle. After collecting the heart rate data, the heart rate detection module 605 can send it to the processor 602 for data processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 602 can display the heart rate value in real time on the display unit 603; optionally, when the processor 602 determines that the heart rate value is low (for example, below 50) or high (for example, above 100), it can trigger a corresponding alarm and send the heart rate value and/or the generated warning message to a server through a communication module.
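The alarm logic above reduces to two threshold checks. The 50 and 100 bpm bounds come from the description's examples; the function name and message strings are illustrative.

```python
LOW_BPM, HIGH_BPM = 50, 100  # example thresholds from the description

def check_heart_rate(bpm):
    """Return an alarm message for out-of-range heart rates, else None."""
    if bpm < LOW_BPM:
        return f"warning: heart rate low ({bpm} bpm)"
    if bpm > HIGH_BPM:
        return f"warning: heart rate high ({bpm} bpm)"
    return None  # normal: the value is simply displayed

print(check_heart_rate(45))  # warning: heart rate low (45 bpm)
print(check_heart_rate(72))  # None
```

In the device, the non-`None` result would both drive the local alarm and be forwarded to the server through the communication module.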
The distance sensor 606 may be provided on the frame and is used to sense the distance from the face to the frame; the distance sensor 606 can be implemented using an infrared sensing principle. Specifically, the distance sensor 606 sends the collected distance data to the processor 602, and the processor 602 controls the brightness of the display unit 603 according to the distance data. Illustratively, when it is determined that the distance collected by the distance sensor 606 is less than 5 centimeters, the processor 602 correspondingly controls the display unit 603 to be in a lit state; when it is determined that the distance sensor does not detect an object approaching, the display unit 603 is correspondingly controlled to be in an off state.
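The proximity-based display control above can be sketched in a few lines. The 5 cm threshold is the example from the description; the parameter names and return strings are illustrative.

```python
WAKE_DISTANCE_CM = 5.0  # example threshold from the description

def display_state(distance_cm, object_detected=True):
    """Light the display when the face is within 5 cm; otherwise turn it off."""
    if object_detected and distance_cm < WAKE_DISTANCE_CM:
        return "lit"
    return "off"

print(display_state(3.2))                         # lit
print(display_state(12.0))                        # off
print(display_state(3.2, object_detected=False))  # off
```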
The breathing light 610 may be provided at the edge of the frame; when the display unit 603 stops displaying a picture, the breathing light 610 can, under the control of the processor 602, light up with a gradually brightening and dimming effect.
The camera 607 may be arranged on the upper frame of the spectacle frame. It may be a front camera module 6071 that collects image data in front of the user, a rear camera module 6072 that can collect the user's eyeball information (because the rear camera 6072 faces the glasses, its plane cannot be seen in this figure, so it is represented by dotted lines), or a combination of the two. The rear camera module 6072 can rotate, so that its collection angle can be set as actually needed, and it can also collect the user's mouth shape. Specifically, when the camera 607 collects a forward image, the collected image is sent to the processor 602 for recognition and processing, and a corresponding trigger event is triggered according to the recognition result. Illustratively, when the user wears the wearable device at home, the collected forward image is recognized; if a furniture item is recognized, a query is made accordingly as to whether a corresponding control event exists; if it exists, the control interface corresponding to the control event is displayed on the display unit 603, and the user can control the corresponding furniture item through the touch panel 604, the furniture item and the smart glasses being connected through Bluetooth or a wireless ad hoc network. When the user wears the wearable device outdoors, a target recognition mode can be enabled accordingly. This target recognition mode can be used to recognize specific people: the camera 607 sends the collected image to the processor 602 for face recognition processing, and if a preset face is recognized, a voice announcement can be made accordingly through the speaker integrated in the smart glasses. The target recognition mode can also be used to recognize different plants: for example, the processor 602, in response to a touch operation on the touch panel 604, records the current image collected by the camera 607 and sends it through the communication module to a server for recognition; the server recognizes the plant in the collected image and feeds the relevant plant name and introduction back to the smart glasses, the fed-back data being displayed on the display unit 603. The camera 607 can also be used to collect an image of the user's eye, such as the eyeball, and generate different control instructions by recognizing the rotation of the eyeball. Illustratively, an upward rotation of the eyeball generates a move-up control instruction, a downward rotation generates a move-down control instruction, a leftward rotation generates a move-left control instruction, and a rightward rotation generates a move-right control instruction. As above, the display unit 603 can display the virtual image data transmitted by the processor 602, and the virtual image can change accordingly with the control instructions generated from the eyeball movements detected by the camera 607. Specifically, this can be screen switching: after a move-left or move-right control instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 603 displays video playback information, the move-left control instruction can rewind the played content and the move-right control instruction can fast-forward it. When the display unit 603 displays editable text content, the move-left, move-right, move-up, and move-down control instructions can be displacement operations on the cursor, that is, the position of the cursor moves in accordance with the user's eyeball movement. When the content displayed by the display unit 603 is a game animation picture, the move-left, move-right, move-up, and move-down control instructions can control the object in the game; for example, in an aircraft game, these control instructions can respectively control the flight direction of the aircraft. When the display unit 603 can display video pictures of different channels, the move-left, move-right, move-up, and move-down control instructions can switch among the different channels, where the move-up and move-down control instructions can switch to preset channels (such as the channels the user commonly uses). When the display unit 603 displays a static picture, the move-left, move-right, move-up, and move-down control instructions can switch among different pictures, where the move-left control instruction can switch to the previous picture, the move-right control instruction can switch to the next picture, the move-up control instruction can switch to the previous atlas, and the move-down control instruction can switch to the next atlas.
The bone-conduction speaker 608 is arranged on the inner wall side of at least one temple and is used to convert the received audio signal sent by the processor 602 into a vibration signal. The bone-conduction speaker 608 transmits sound to the human inner ear through the skull: the electrical audio signal is converted into a vibration signal, which is transmitted through the skull to the cochlea and then perceived through the auditory nerve. Using the bone-conduction speaker 608 as the sound-producing device reduces the thickness of the hardware structure and its weight; at the same time, it produces no electromagnetic radiation and is not affected by electromagnetic radiation, and it has the advantages of noise resistance, water resistance, and leaving the ears free.
The microphone 609 may be provided on the lower frame of the spectacle frame and is used to collect external sound (from the user or the environment) and transmit it to the processor 602 for processing. Illustratively, the microphone 609 collects the sound made by the user, and the processor 602 performs voiceprint recognition on it; if the voiceprint is recognized as that of an authenticated user, subsequent voice control can be accepted accordingly. Specifically, the user can utter a voice command; the microphone 609 sends the collected voice to the processor 602 for recognition, and a corresponding control instruction, such as "power on", "power off", "increase display brightness", or "decrease display brightness", is generated according to the recognition result, after which the processor 602 executes the corresponding control processing according to the generated control instruction.
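The voiceprint-gated command handling above can be sketched as a two-step check: authenticate the speaker, then map the utterance to a control instruction. The enrolled voiceprint ID, the action strings, and the `handle_voice` helper are hypothetical; the four command phrases are the ones named in the description.

```python
AUTHENTICATED_VOICEPRINTS = {"user-01"}  # hypothetical enrolled voiceprint IDs

COMMANDS = {  # commands named in the description
    "power on": "booting device",
    "power off": "shutting down",
    "increase display brightness": "brightness +1",
    "decrease display brightness": "brightness -1",
}

def handle_voice(voiceprint_id, utterance):
    """Only authenticated voiceprints may issue control instructions."""
    if voiceprint_id not in AUTHENTICATED_VOICEPRINTS:
        return "rejected: unrecognized voiceprint"
    return COMMANDS.get(utterance, "unknown command")

print(handle_voice("user-01", "power on"))  # booting device
print(handle_voice("user-99", "power on"))  # rejected: unrecognized voiceprint
```

Checking the voiceprint before parsing the command mirrors the order in the description: an unauthenticated speaker is rejected before any control instruction is generated.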
The motion sensor 612, which includes a distance sensor, an acceleration sensor, a gyroscope, and the like, can detect the shaking state information of a body part, such as the shaking rhythm, shaking direction, and shaking amplitude.
The wearable device provided by this embodiment of the present application can match and play the corresponding target music to be played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information, or foot tapping information. This solves the problem in the prior art that music control is cumbersome and insufficiently intelligent, optimizes the music control operation, enriches the functions of the wearable device, and improves user stickiness of the wearable device.
The music playback control apparatus, storage medium, and wearable device provided in the above embodiments can perform the music control method provided by any embodiment of the present application and have the corresponding functional modules and beneficial effects for performing the method. For technical details not described in detail in the above embodiments, reference may be made to the music control method provided by any embodiment of the present application.
The above are only preferred embodiments of the present application and the technical principles used. The present application is not limited to the specific embodiments described here; those skilled in the art can make various obvious changes, readjustments, and substitutions without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments and may include other equivalent embodiments without departing from the concept of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (13)
1. A music control method, characterized by comprising:
acquiring shaking state information of a body part of a user wearing a wearable device;
matching target music to be played according to the shaking state information;
controlling playback of the target music.
2. The method according to claim 1, wherein the shaking state information includes a shaking rhythm, and matching target music to be played according to the shaking state information comprises:
determining, according to the shaking rhythm, rhythm information and the target music to be played that matches the shaking rhythm.
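A hedged sketch, not from the patent, of how the rhythm matching in claim 2 might work: the intervals between shaking beats yield a tempo estimate, and the library track with the closest tempo is chosen. The track-list structure and function name are illustrative assumptions:

```python
def match_by_rhythm(beat_times, library):
    """Pick the library track whose tempo best matches the shaking rhythm.

    beat_times: timestamps (seconds) of detected shaking beats.
    library: list of (title, bpm) pairs (illustrative structure).
    Returns the title of the closest-tempo track, or None if no match
    can be made.
    """
    if len(beat_times) < 2 or not library:
        return None
    # Average interval between consecutive beats -> shaking tempo in BPM.
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg_interval = sum(intervals) / len(intervals)
    shaking_bpm = 60.0 / avg_interval
    return min(library, key=lambda track: abs(track[1] - shaking_bpm))[0]
```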
3. The method according to claim 2, characterized by further comprising:
if a music playing state is currently active, acquiring the current shaking rhythm of the body part of the user wearing the wearable device;
generating rhythm information for the currently playing music according to the current shaking rhythm.
4. The method according to claim 1, wherein acquiring the shaking state information of the body part of the user wearing the wearable device comprises:
acquiring the shaking state information through a motion sensor of the wearable device; or,
acquiring the shaking state information through a bone-conduction microphone of the wearable device.
5. The method according to claim 1, wherein a mouth shape of the user is acquired at the same time as the shaking state information of the user's body part, and matching target music to be played according to the shaking state information comprises:
matching target music to be played according to the shaking state information and the mouth shape.
6. The method according to any one of claims 1-5, wherein the shaking state information includes a shaking direction and a shaking amplitude, the shaking direction including horizontal shaking, vertical shaking and back-and-forth shaking, and the method further comprises:
acquiring the shaking direction and shaking amplitude of each beat of the shaking motion within a sampling period;
determining a target play type according to the shaking direction and shaking amplitude of each beat within the sampling period, the play type including classical, rock, folk and pop;
playing the target music according to the target play type.
7. The method according to claim 6, wherein determining the target play type according to the shaking direction and shaking amplitude of each beat within the sampling period comprises:
separately counting the numbers of beats within the sampling period whose shaking direction is horizontal shaking, vertical shaking and back-and-forth shaking;
if the number of vertical-shaking beats is greater than the number of back-and-forth-shaking beats, which is greater than the number of horizontal-shaking beats, determining the target play type to be rock;
if the variance of the numbers of horizontal-shaking, vertical-shaking and back-and-forth-shaking beats is within a first preset range, determining the target play type to be pop;
if the variance of the numbers of horizontal-shaking, vertical-shaking and back-and-forth-shaking beats is within a second preset range, determining the target play type to be folk, the second preset range being smaller than the first preset range;
if the number of horizontal-shaking beats is greater than the number of back-and-forth-shaking beats, which is greater than the number of vertical-shaking beats, determining the target play type to be classical.
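As an illustrative sketch only (the patent specifies no code and no concrete ranges), the beat-count classification of claim 7 could look as follows. The variance thresholds standing in for the first and second preset ranges, and the function name, are assumptions:

```python
def classify_by_beat_counts(directions, pop_var=4.0, folk_var=1.0):
    """Classify the play type from per-direction beat counts (claim 7 sketch).

    directions: one 'horizontal' / 'vertical' / 'back_and_forth' label per
    beat in the sampling period. pop_var / folk_var stand in for the first
    and second preset variance ranges (illustrative values; folk_var is the
    narrower range, per the claim).
    """
    h = directions.count("horizontal")
    v = directions.count("vertical")
    b = directions.count("back_and_forth")
    mean = (h + v + b) / 3.0
    variance = ((h - mean) ** 2 + (v - mean) ** 2 + (b - mean) ** 2) / 3.0
    if v > b > h:
        return "rock"        # mostly vertical shaking
    if h > b > v:
        return "classical"   # mostly horizontal shaking
    if variance <= folk_var:
        return "folk"        # very even spread across directions
    if variance <= pop_var:
        return "pop"         # moderately even spread
    return None              # no rule matched
```

Because the second preset range lies inside the first, the narrower folk test must be applied before the pop test, as done above.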
8. The method according to claim 6, wherein determining the target play type according to the shaking direction and shaking amplitude of each beat within the sampling period comprises:
calculating the average of the shaking amplitudes of each beat of the shaking motion within the sampling period;
if the average is within a first setting range, determining the target play type to be rock;
if the average is within a second setting range, determining the target play type to be pop;
if the average is within a third setting range, determining the target play type to be folk;
if the average is within a fourth setting range, determining the target play type to be classical;
wherein the first setting range is greater than the second setting range, the second setting range is greater than the third setting range, and the third setting range is greater than the fourth setting range.
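A minimal sketch of the amplitude-based variant in claim 8, assuming the four descending setting ranges can be modelled as three descending thresholds; the numeric values are illustrative, not from the patent:

```python
def classify_by_amplitude(amplitudes, rock_min=3.0, pop_min=2.0, folk_min=1.0):
    """Classify the play type from the mean shaking amplitude (claim 8 sketch).

    The four setting ranges of the claim are modelled as descending
    thresholds: largest amplitudes -> rock, smallest -> classical.
    Threshold values are illustrative assumptions.
    """
    if not amplitudes:
        return None
    avg = sum(amplitudes) / len(amplitudes)
    if avg >= rock_min:
        return "rock"        # first setting range (most vigorous shaking)
    if avg >= pop_min:
        return "pop"         # second setting range
    if avg >= folk_min:
        return "folk"        # third setting range
    return "classical"       # fourth setting range (gentlest shaking)
```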
9. The method according to claim 6, wherein determining the target play type according to the shaking direction and shaking amplitude of each beat within the sampling period comprises:
acquiring a first change frequency in the shaking direction and a second change frequency in the shaking amplitude between the beats within the sampling period;
if the first change frequency and the second change frequency are within a fifth setting range, determining the target play type to be rock;
if the first change frequency and the second change frequency are within a sixth setting range, determining the target play type to be pop;
if the first change frequency and the second change frequency are within a seventh setting range, determining the target play type to be folk;
if the first change frequency and the second change frequency are within an eighth setting range, determining the target play type to be classical;
wherein the fifth setting range is greater than the sixth setting range, the sixth setting range is greater than the seventh setting range, and the seventh setting range is greater than the eighth setting range.
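A hedged sketch of the change-frequency variant in claim 9: the first change frequency is taken here as the fraction of adjacent beats whose direction differs, the second as the fraction whose amplitude changes noticeably, and the fifth to eighth setting ranges are modelled as descending thresholds. All names, threshold values and the amplitude step are assumptions for illustration:

```python
def classify_by_change_frequency(directions, amplitudes,
                                 thresholds=(0.6, 0.4, 0.2)):
    """Classify the play type from beat-to-beat change frequencies (claim 9 sketch).

    directions / amplitudes: per-beat direction labels and amplitudes.
    thresholds: descending cutoffs standing in for the 5th-7th setting
    ranges; anything below the last falls into the 8th range (classical).
    """
    n = len(directions)
    if n < 2 or len(amplitudes) != n:
        return None
    dir_changes = sum(1 for a, b in zip(directions, directions[1:]) if a != b)
    amp_changes = sum(1 for a, b in zip(amplitudes, amplitudes[1:])
                      if abs(b - a) > 0.5)  # illustrative amplitude step
    f1 = dir_changes / (n - 1)  # first change frequency (direction)
    f2 = amp_changes / (n - 1)  # second change frequency (amplitude)
    rock_t, pop_t, folk_t = thresholds
    if f1 >= rock_t and f2 >= rock_t:
        return "rock"        # fifth setting range (most erratic shaking)
    if f1 >= pop_t and f2 >= pop_t:
        return "pop"         # sixth setting range
    if f1 >= folk_t and f2 >= folk_t:
        return "folk"        # seventh setting range
    return "classical"       # eighth setting range (steadiest shaking)
```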
10. The method according to claim 6, wherein the wearable device includes smart glasses.
11. A music playback control apparatus, characterized by comprising:
a shaking state information acquisition module, configured to acquire shaking state information of a body part of a user wearing a wearable device;
a target music determination module, configured to match target music to be played according to the shaking state information;
a target music playback module, configured to control playback of the target music.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the music control method according to any one of claims 1-10.
13. A wearable device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the music control method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811005539.0A CN109032384B (en) | 2018-08-30 | 2018-08-30 | Music playing control method and device, storage medium and wearable device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109032384A true CN109032384A (en) | 2018-12-18 |
CN109032384B CN109032384B (en) | 2021-09-28 |
Family
ID=64625923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811005539.0A Active CN109032384B (en) | 2018-08-30 | 2018-08-30 | Music playing control method and device, storage medium and wearable device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109032384B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101394906A (en) * | 2006-01-24 | 2009-03-25 | 索尼株式会社 | Audio reproducing device, audio reproducing method, and audio reproducing program |
US20150162047A1 (en) * | 2013-12-10 | 2015-06-11 | Joseph J. Lacirignola | Methods and apparatus for recording impulsive sounds |
CN104867506A (en) * | 2015-04-08 | 2015-08-26 | 小米科技有限责任公司 | Music automatic control method and device |
CN108415764A (en) * | 2018-02-13 | 2018-08-17 | 广东欧珀移动通信有限公司 | Electronic device, game background music matching process and Related product |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110083329A (en) * | 2019-04-24 | 2019-08-02 | 深圳传音通讯有限公司 | Terminal adjusting method and terminal |
CN111984818A (en) * | 2019-05-23 | 2020-11-24 | 北京地平线机器人技术研发有限公司 | Singing following recognition method and device, storage medium and electronic equipment |
CN110232911A (en) * | 2019-06-13 | 2019-09-13 | 南京地平线集成电路有限公司 | With singing recognition methods, device, storage medium and electronic equipment |
GB2600562A (en) * | 2019-07-13 | 2022-05-04 | Solos Tech Limited | Hardware architecture for modularized eyewear systems apparatuses, and methods |
WO2021044219A3 (en) * | 2019-07-13 | 2021-06-03 | Solos Technology Limited | Hardware architecture for modularized eyewear systems apparatuses, and methods |
JP2022543738A (en) * | 2019-07-13 | 2022-10-14 | ソロズ・テクノロジー・リミテッド | Hardware architecture for modular eyewear systems, devices and methods |
GB2600562B (en) * | 2019-07-13 | 2023-09-20 | Solos Tech Limited | Hardware architecture for modularized eyewear systems apparatuses, and methods |
JP7352004B2 (en) | 2019-07-13 | 2023-09-27 | ソロズ・テクノロジー・リミテッド | Hardware architecture for modularized eyeglass systems, devices, and methods |
CN111752388A (en) * | 2020-06-19 | 2020-10-09 | 深圳振科智能科技有限公司 | Application control method, device, equipment and storage medium |
CN114153308A (en) * | 2020-09-08 | 2022-03-08 | 阿里巴巴集团控股有限公司 | Gesture control method and device, electronic equipment and computer readable medium |
CN114153308B (en) * | 2020-09-08 | 2023-11-21 | 阿里巴巴集团控股有限公司 | Gesture control method, gesture control device, electronic equipment and computer readable medium |
CN113160848A (en) * | 2021-05-07 | 2021-07-23 | 网易(杭州)网络有限公司 | Dance animation generation method, dance animation model training method, dance animation generation device, dance animation model training device, dance animation equipment and storage medium |
CN113867524A (en) * | 2021-09-10 | 2021-12-31 | 安克创新科技股份有限公司 | Control method and device and intelligent audio glasses |
WO2023056668A1 (en) * | 2021-10-09 | 2023-04-13 | 深圳市深科创投科技有限公司 | Smart audio glasses having acceleration sensor |
CN114816199A (en) * | 2022-04-29 | 2022-07-29 | 西安歌尔泰克电子科技有限公司 | Control method of wearable device, wearable device and computer storage medium |
CN115185085A (en) * | 2022-07-18 | 2022-10-14 | 佛山理成科技有限公司 | Intelligent glasses and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN109032384B (en) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109032384A (en) | Music control method, device and storage medium and wearable device | |
US11450073B1 (en) | Multi-user virtual and augmented reality tracking systems | |
US10216474B2 (en) | Variable computing engine for interactive media based upon user biometrics | |
US11205408B2 (en) | Method and system for musical communication | |
CN109120790B (en) | Call control method and device, storage medium and wearable device | |
CN111936036A (en) | Guiding live entertainment using biometric sensor data to detect neurological state | |
US20180124497A1 (en) | Augmented Reality Sharing for Wearable Devices | |
US20180123813A1 (en) | Augmented Reality Conferencing System and Method | |
CN109119057A (en) | Musical composition method, apparatus and storage medium and wearable device | |
US11247021B2 (en) | Craniaofacial emotional response influencer for audio and visual media | |
JP2016126500A (en) | Wearable terminal device and program | |
CN109259724B (en) | Eye monitoring method and device, storage medium and wearable device | |
JP2020039029A (en) | Video distribution system, video distribution method, and video distribution program | |
CN109040462A (en) | Stroke reminding method, apparatus, storage medium and wearable device | |
CN109068126B (en) | Video playing method and device, storage medium and wearable device | |
JP7207468B2 (en) | Output control device, output control method and program | |
KR20150141793A (en) | Wireless receiver and method for controlling the same | |
JP7198244B2 (en) | Video distribution system, video distribution method, and video distribution program | |
KR20150067922A (en) | A rhythm game device interworking user behavior | |
US11941177B2 (en) | Information processing device and information processing terminal | |
CN109257490A (en) | Audio-frequency processing method, device, wearable device and storage medium | |
CN109144263A (en) | Social householder method, device, storage medium and wearable device | |
US20230009322A1 (en) | Information processing device, information processing terminal, and program | |
CN109101101B (en) | Wearable device control method and device, storage medium and wearable device | |
CN109361727B (en) | Information sharing method and device, storage medium and wearable device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||