CN109032384B - Music playing control method and device, storage medium and wearable device

Info

Publication number: CN109032384B
Application number: CN201811005539.0A
Authority: CN (China)
Prior art keywords: shaking, target, music, determining, state information
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109032384A (en)
Inventors: 林肇堃, 魏苏龙, 麦绮兰
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Events: application CN201811005539.0A filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; publication of CN109032384A; application granted; publication of CN109032384B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/16 Sound input; Sound output

Abstract

The embodiment of the application discloses a music playing control method and device, a storage medium and a wearable device, wherein the method comprises the following steps: acquiring shaking state information of a body part of a user wearing the wearable device; matching target music to be played according to the shaking state information; and controlling the target music to be played. By adopting the technical solution of this application, corresponding target music to be played can be matched and played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information or foot shaking information, which solves the problems in the prior art that music playing control is cumbersome to operate and insufficiently intelligent, optimizes the music playing control operation, enriches the functions of the wearable device, and improves user adhesion to the wearable device.

Description

Music playing control method and device, storage medium and wearable device
Technical Field
The embodiment of the application relates to the technical field of intelligent wearable equipment, in particular to a music playing control method and device, a storage medium and wearable equipment.
Background
With the progress of society and the development of science and technology, various components and parts have become smaller and smaller, so that they can be integrated into wearable devices.
Wearable devices on the market can, like smartphones, have an independent operating system and run application programs. Their functions are increasingly rich and provide convenience for people's life and work: users can make and receive calls, measure physiological parameters, listen to music, watch videos, play games, and so on. However, when a user listens to music with a wearable device, the user has to manually enter the category of songs or the specific song to be listened to, which is cumbersome and not intelligent. Therefore, the music control function of wearable devices needs to be optimized.
Disclosure of Invention
The embodiment of the application provides a music playing control method and device, a storage medium and a wearable device, which can optimize a music playing control scheme in the related technology.
In a first aspect, an embodiment of the present application provides a music playing control method, including:
acquiring shaking state information of a body part of a user wearing the wearable device;
matching target music to be played according to the shaking state information;
and controlling the target music to be played.
In a second aspect, an embodiment of the present application provides a music playback control apparatus, including:
the shaking state information acquisition module is used for acquiring shaking state information of a body part of a user wearing the wearable device;
the target music determining module is used for matching target music to be played according to the shaking state information;
and the target music playing module is used for controlling the playing of the target music.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the music playing control method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a wearable device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the music playing control method provided in the first aspect.
The embodiment of the application provides a scheme for controlling music playing, which includes: obtaining the shaking state information of the body part of a user wearing the wearable device; matching target music to be played according to the shaking state information; and controlling the target music to be played. By adopting the technical solution of this application, corresponding target music to be played can be matched and played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information or foot shaking information, which solves the problems in the prior art that music playing control is cumbersome to operate and insufficiently intelligent, optimizes the music playing control operation, enriches the functions of the wearable device, and improves user adhesion to the wearable device.
Drawings
Fig. 1 is a flowchart of a music playing control method according to an embodiment of the present application;
fig. 2 is a flowchart of another music playing control method provided in the embodiment of the present application;
fig. 3 is a schematic entity diagram of smart glasses provided in an embodiment of the present application;
fig. 4 is a flowchart of another music playing control method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a music playing control device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a wearable device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another wearable device provided in the embodiments of the present application;
fig. 8 is a schematic entity diagram of a wearable device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of a music playing control method provided in an embodiment of the present application, where the method of this embodiment may be executed by a music playing control device, the device may be implemented by hardware and/or software, and the device may be disposed inside a wearable device as a part of the wearable device.
As shown in fig. 1, the music playing control method provided in this embodiment includes the following steps:
step 101, obtaining shaking state information of a body part of a user wearing the wearable device.
Wearable devices described in this embodiment include, but are not limited to, smart glasses, smart helmets, smart gloves, smart bracelets, smart watches, smart rings, smart apparel, smart shoes, and the like.
Taking smart glasses as an example, the structure of the wearable device is briefly introduced. The smart glasses include a glasses frame body and lenses. The glasses frame body includes temples and a frame. Optionally, breathing lamps can be provided on the inner sides of the temples; the breathing lamps can be LED lamps and can flash according to the frequency of head movement of the wearer of the smart glasses. A touch area (touch panel) and a bone conduction area are also provided on the temples. The touch area is arranged on the outer side of a temple, and a touch detection module is arranged in the touch area to detect touch operations of the user. For example, a touch sensor module is used to detect a touch operation by the user; the touch sensor module is at a low level in the initial state and at a high level when there is a touch operation. In a scenario where a user wears the smart glasses, the side of the temple close to the face is defined as the inner side, and the side opposite to the inner side and away from the face is defined as the outer side. The bone conduction area is provided on the temple near the ear, and a bone conduction speaker or a bone conduction sensor is arranged in the bone conduction area. A heart rate sensor is arranged at the position where the temple of the glasses is close to the temple region of the face, and is used to acquire heart rate information of the user wearing the smart glasses. A smart microphone is arranged on the frame; it can intelligently recognize the current ambient noise level and automatically adjust the performance of the microphone according to the ambient noise. An acceleration sensor, a gyroscope and the like are also arranged on the frame. In addition, an electrooculogram (EOG) sensor is arranged on the frame and the nose pads for acquiring the eye state of the user. Furthermore, a micro-processing area is provided on the temple; the processor is arranged in the micro-processing area and is electrically connected with the touch detection module, the bone conduction speaker, the heart rate sensor, the smart microphone, the acceleration sensor, the gyroscope, the electrooculogram sensor and other devices, so as to receive data to be processed, perform data operations and data processing, and output control instructions to the corresponding devices. It should be noted that the smart glasses may download multimedia resources from the cloud for playing through the Internet, and may also acquire multimedia resources from a terminal device by establishing a communication connection with the terminal device, which is not limited in this application.
In this embodiment, the shake state information may be obtained when it is detected that the currently running application is a music playing program or when a gesture or voice of a user for starting the music playing program is detected, and of course, the step 101 may also be executed under other relevant conditions, which is not limited in this embodiment.
The body part of the user in this embodiment may include the head, a hand (including the arm, palm, or fingers), a foot (including the sole or toes), the waist, the hips, and the pectoral muscles, or may be a finer part such as an eye, a lip, or an ear; correspondingly, for an eye, the shaking state information may be blinking information of the eye.
Application scenarios applicable to the embodiments of the present application include, but are not limited to, the following. In one scenario, the user sits on a chair or stands still in a non-motion state, and only the body part wearing the wearable device shakes, for example, the head wearing smart glasses shakes, the wrist wearing a smart bracelet shakes, or the foot wearing smart shoes shakes; it can be understood that, when smart glasses are worn, shaking of the foot may also drive the head to shake correspondingly, so that the smart glasses can detect the shaking state information of the head caused by the shaking of the foot. In another scenario, the user is in a motion state such as running or rope skipping, and the wearable device can detect the motion state information of the user's whole body or of a certain body part. This embodiment is not limited to directly detecting the shaking state information of a specific body part of the user; it may also acquire shaking state information of a body part generated by vibration caused by rubbing or tapping the body part, for example, when the user taps or rubs the head with a finger, fine shaking of the head is generated.
The shaking state information in this embodiment may include a shaking rhythm, a shaking direction, and a shaking amplitude. The shaking directions may include horizontal shaking, vertical shaking, and front-back shaking. The actual shaking direction of the user's body part may form a certain angle with the horizontal, vertical, and front-back directions. A shaking direction whose angle with the vertical direction is smaller than 45 degrees may be assigned to the vertical direction; a shaking direction whose angle with the horizontal direction is smaller than 45 degrees may be assigned to the horizontal direction; and a shaking direction whose angle with the front-back direction is smaller than 45 degrees may be assigned to the front-back direction. A shaking direction whose angle with the front-back direction is smaller than 45 degrees and whose angle with the vertical direction is also smaller than 45 degrees may be regarded as the vertical direction plus the front-back direction. Based on similar direction determination rules, the shaking directions corresponding to the various actual shaking angles of the user can be determined.
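For illustration only, the following Python sketch applies the 45-degree assignment rule described above to a single shaking action; the axis mapping, the fallback to the closest axis, and the example values are assumptions, not part of the claimed method.

```python
import math

def classify_shake_direction(dx, dy, dz, threshold_deg=45.0):
    """Assign one shaking action (displacement or peak-acceleration components
    dx, dy, dz) to the horizontal, vertical and/or front-back direction using
    the 45-degree rule described above."""
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    if norm == 0:
        return []
    angles = {
        "horizontal": math.degrees(math.acos(min(1.0, abs(dx) / norm))),
        "vertical": math.degrees(math.acos(min(1.0, abs(dy) / norm))),
        "front-back": math.degrees(math.acos(min(1.0, abs(dz) / norm))),
    }
    within = [axis for axis, angle in angles.items() if angle <= threshold_deg]
    # With orthogonal axes, two directions can only co-occur at exactly 45 degrees
    # (the "vertical plus front-back" case above); otherwise fall back to the closest axis.
    return within or [min(angles, key=angles.get)]

print(classify_shake_direction(0.1, 0.9, 0.3))  # ['vertical']
```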
The acquiring shaking state information of the body part of the user wearing the wearable device may include: acquiring the shaking state information through a motion sensor of the wearable device; or acquiring the shaking state information through a bone conduction microphone of the wearable device.
The motion sensor includes an acceleration sensor and a gyroscope, and the acceleration sensor may include a horizontal acceleration sensor and a gravity acceleration sensor. A shaking operation of the user can be decomposed into individual shaking actions, each shaking action being equivalent to one beat in music. The shaking rhythm of the beats can be detected by the acceleration sensor and the gyroscope, the shaking direction of each beat can be detected by the gyroscope, and the shaking amplitude of each beat can be detected by the acceleration sensor and the gyroscope. For example, taking "X" as a beat, the obtained rhythm information is "X-X _ X —", which corresponds to "la-la, la-", where "—" is a dash that extends the duration of the previous beat and "_" is a pause, i.e., the beat is not voiced.
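As a rough illustration of how individual beats could be segmented out of motion-sensor samples, the Python sketch below runs a simple peak detector over the acceleration magnitude; the sampling rate, threshold, and minimum beat spacing are assumed values chosen only for the example.

```python
def detect_shaking_rhythm(accel_magnitudes, sample_rate_hz=50, threshold=1.5, min_gap_s=0.25):
    """Treat every local maximum of the acceleration magnitude above `threshold`
    that is at least `min_gap_s` away from the previous peak as one beat, and
    return the inter-beat intervals (seconds) that describe the shaking rhythm."""
    min_gap = int(min_gap_s * sample_rate_hz)
    beat_indices, last = [], -min_gap
    for i in range(1, len(accel_magnitudes) - 1):
        a = accel_magnitudes[i]
        if (a > threshold and a >= accel_magnitudes[i - 1]
                and a >= accel_magnitudes[i + 1] and i - last >= min_gap):
            beat_indices.append(i)
            last = i
    return [(j - i) / sample_rate_hz for i, j in zip(beat_indices, beat_indices[1:])]
```

In such a representation, a longer interval would correspond to the extension dash and a skipped beat to the pause in the notation above.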
When the user taps or rubs the head, the resulting pulses vibrate through the skull. The bone conduction microphone in the smart glasses can pick up these vibration pulses through the skull and convert them into analog and/or digital signals; an analog signal can be digitized by an analog-to-digital converter, and after receiving the digital signal the processor can analyze it to generate the shaking state information.
Step 102, matching target music to be played according to the shaking state information.
In this embodiment, the shaking state information of the body part of the user wearing the wearable device within a set time (for example, within a sampling period) can be acquired, and the shaking state information is matched against music in a preset song library. If the currently acquired shaking state information is not sufficient to match suitable music, shaking state information over a longer time can be further acquired until the target music is matched.
After the shaking state information is matched against music in the preset song library, several pieces of music with a high matching degree can be obtained; the music with the highest matching degree can be used as the target music, or the candidates can be displayed to the user for selection.
The matching method in this embodiment may include: matching against a locally stored song library, or matching against an online song library through a network communication module of the wearable device.
The shaking information may include a shaking rhythm, and this step may include: determining, according to the shaking rhythm, target music to be played whose rhythm information matches the shaking rhythm. The shaking rhythm may include the length of the beats and the pauses between beats. The shaking frequency of the user's body part can be acquired through the acceleration sensor and the gyroscope of the wearable device, and the shaking rhythm is then obtained.
In this embodiment, the rhythm information of each piece of music can be preset and stored; the rhythm information can be carried in the music file or generated by the wearable device according to the corresponding shaking operation previously made by the user when listening to that music.
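A minimal sketch of the matching step, assuming the song library stores rhythm information as lists of beat intervals in seconds; the library contents, the distance measure, and the top-N behaviour are illustrative assumptions.

```python
def match_target_music(shake_intervals, song_library, top_n=3):
    """Rank songs by how closely their stored rhythm information matches the
    user's shaking rhythm; the best match can be played directly, or the top
    few candidates can be shown to the user for selection."""
    def distance(a, b):
        n = min(len(a), len(b))
        if n == 0:
            return float("inf")
        return sum(abs(x - y) for x, y in zip(a, b)) / n

    ranked = sorted(song_library.items(),
                    key=lambda item: distance(shake_intervals, item[1]))
    return ranked[:top_n]

# Hypothetical library: song id -> stored beat intervals (seconds)
library = {"song_a": [0.5, 0.5, 1.0, 0.5], "song_b": [0.8, 0.8, 0.8, 0.8]}
print(match_target_music([0.52, 0.49, 0.95, 0.50], library, top_n=1))  # [('song_a', ...)]
```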
Step 103, controlling the target music to be played.
The music playing control method provided by this embodiment can match and play corresponding target music to be played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information, or foot shaking information, which solves the problems in the prior art that music playing control is cumbersome to operate and insufficiently intelligent, optimizes the music playing control operation, enriches the functions of the wearable device, and improves user adhesion to the wearable device.
In some embodiments, the rhythm information of the music is determined before the shaking state information of the body part of the user wearing the wearable device is obtained and before the target music whose rhythm information matches the shaking rhythm is determined according to the shaking rhythm. That is, the music playing control method provided in this embodiment may further include the following steps: if the wearable device is currently in a music playing state, acquiring the current shaking rhythm of the body part of the user wearing the wearable device; and generating rhythm information of the currently played music according to the current shaking rhythm. Illustratively, rhythm information of "X-X _ X X —" is generated.
The advantage of arranging this step in this way is as follows: because each person's habits and personality are different, the shaking state information shown when listening to the same music also differs from person to person. In this embodiment, the rhythm information of a piece of music for a particular user is generated according to the shaking rhythm that this user actually shows when listening to that music, so that when music matching is performed, the matched music better conforms to the user's shaking habits and needs, which improves the accuracy of music matching. Furthermore, historical data consisting of the user's shaking rhythm when listening to music and the rhythm information of the music played at the time can be used as training samples; the training samples are trained based on a machine learning method to generate a music rhythm information determination model, and after an acquired shaking rhythm of the user is input into the music rhythm information determination model, the music rhythm information corresponding to that shaking rhythm can be obtained. Further, the music rhythm information determination model can be corrected according to the user's adjustment instructions for the music rhythm information, improving the accuracy of the model.
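The per-user rhythm generation could look roughly like the following sketch; the in-memory store and the beat-interval representation are assumptions, and the machine-learning variant mentioned above would replace the simple averaging with a trained model.

```python
user_rhythm_profiles = {}  # (user_id, song_id) -> list of recorded shaking rhythms

def record_rhythm_for_current_song(user_id, song_id, shake_intervals):
    """While a song is playing, remember the shaking rhythm this user actually shows for it."""
    user_rhythm_profiles.setdefault((user_id, song_id), []).append(shake_intervals)

def rhythm_info_for(user_id, song_id, default=None):
    """Average the recorded rhythms beat by beat to obtain this user's rhythm
    information for the song; fall back to the rhythm carried in the music
    file (passed as `default`) if nothing has been recorded yet."""
    samples = user_rhythm_profiles.get((user_id, song_id))
    if not samples:
        return default
    n = min(len(s) for s in samples)
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]
```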
Fig. 2 is a flowchart of another music playing control method according to an embodiment of the present application. The mouth shape information of the user can be acquired while the shaking state information of the body part of the user is acquired, and the target music to be played is matched according to the shaking state information and the mouth shape information. As shown in fig. 2, the method provided by this embodiment includes the following steps:
step 201, obtaining shaking state information of a body part of a user wearing the wearable device, and mouth shape information of the user.
It should be noted that, in this embodiment, "while" does not refer to exactly the same instant; it means that, before the operation of matching the target music is performed, not only the shaking state information of the user's body part but also the mouth shape information of the user is acquired, for example, the head shaking information of the user and the mouth shape information of the user. The two acquisition operations may be performed sequentially or in parallel, which is not limited in this embodiment.
For the mouth shape information, the face or mouth of the user can be photographed by a camera on the wearable device, and the processor of the wearable device then processes the images to obtain rhythm information and/or voice information; alternatively, the captured face or mouth images are sent to a server, the server processes them to obtain the rhythm information and/or voice information, and the result is returned to the wearable device. The smart glasses can also detect, through the bone conduction microphone, the skull vibration signal caused by the mouth movements, convert the vibration signal into a digital signal, and further process the digital signal to obtain the rhythm information and/or voice information corresponding to the mouth shape information; or the digital signal is sent to the server, and after the server obtains the rhythm information and/or voice information corresponding to the mouth shape information, it is returned to the smart glasses.
Taking the smart glasses as an example, the manner of photographing the mouth shape information of the user wearing the smart glasses through a camera is described. The camera can be a miniature camera arranged at the upper frame of the glasses frame. Fig. 3 is a schematic entity diagram of smart glasses provided in an embodiment of the present application. As shown in fig. 3, a protruding portion 310 is provided on the upper frame of the smart glasses, and the protruding portion 310 extends a set length d from the plane A where the frame is located in a direction away from the frame. A front camera 6071 is arranged on a first plane 311 of the protruding portion 310 that is parallel to the plane A where the frame is located. The distance between the frame and the second plane of the protruding portion 310 parallel to the plane A (which can be regarded as the plane in contact with the frame) is smaller than the distance between the first plane and the frame. The protruding portion 310 includes a rotating portion 312; the rotating portion 312 extends from one end of the protruding portion 310 to form a protruding body, the rotating portion 312 coincides with the central axis of the protruding body, and the rotating portion 312 can rotate around the central axis. A rear camera 6072 is provided on the rotating portion 312. The rear camera 6072 can be controlled to rotate so as to photograph target objects at different angles. For example, when an eye image needs to be captured, the plane where the rear camera is located is rotated to be parallel to the first plane. When a mouth shape image needs to be captured, the plane where the rear camera is located is controlled to rotate along the direction of the arrow to form an included angle of 45 to 60 degrees with the first plane, so that the mouth shape image can be captured.
Illustratively, mouth shape images of the user wearing the smart glasses are captured by the rear camera according to a set period, and the rear camera sends the mouth shape images to the processor. The processor identifies each mouth shape image and determines the mouth shape information. For example, the processor counts the mouth shape change frequency within a preset time period, or the processor records the amplitude value of each mouth shape within the preset time period, and so on.
Step 202, matching target music to be played according to the shaking state information and the mouth shape information.
The matching mode in this embodiment includes a local song library matching mode and an online song library matching mode.
Illustratively, the target music is matched from a local song library or an online song library according to the shaking rhythm and the mouth shape change frequency. That is, if the shaking rhythm is "X-X _ X X —" and the mouth shape change frequency matches the shaking rhythm, the target music is matched based on the shaking rhythm; for example, target music matching the shaking rhythm can be selected from a local song library or an online song library.
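One plausible way to combine the two signals, reusing the match_target_music sketch above; the relative tolerance and the frequency comparison are assumptions made only for illustration.

```python
def match_with_mouth_shape(shake_intervals, mouth_change_freq_hz, song_library, tolerance=0.2):
    """If the mouth-shape change frequency agrees with the shaking tempo within
    a relative tolerance, match the target music on the shaking rhythm;
    otherwise return None so that more data can be collected first."""
    if not shake_intervals:
        return None
    shake_freq_hz = 1.0 / (sum(shake_intervals) / len(shake_intervals))  # beats per second
    if abs(shake_freq_hz - mouth_change_freq_hz) > tolerance * shake_freq_hz:
        return None
    return match_target_music(shake_intervals, song_library, top_n=1)
```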
Step 203, controlling the target music to be played.
According to the music playing control method provided by the embodiment, the mouth shape information of the user is acquired while the shaking state information of the user is acquired, so that the target music is matched together according to the shaking state information and the mouth shape information, and the accuracy and the speed of matching the target music can be improved.
Fig. 4 is a flowchart of another music playing control method according to an embodiment of the present application. As shown in fig. 4, the music playing control method provided in this embodiment includes the following steps:
step 301, obtaining a shaking rhythm of a body part of a user wearing the wearable device in a sampling period, and a shaking direction and a shaking amplitude of each beat in the shaking rhythm.
The sampling period may be a predetermined time, such as 3 seconds, 5 seconds, etc.
The shaking rhythm is formed by combining the shaking actions of multiple beats. The motion sensor of the wearable device can be used to collect the shaking rhythm formed by the multi-beat shaking actions, as well as the shaking direction and shaking amplitude corresponding to each beat. The shaking direction includes horizontal shaking, vertical shaking, and front-back shaking. The shaking amplitude may be the shaking amplitude of the beat in its shaking direction.
Step 302, matching target music to be played according to the shaking rhythm.
The shaking rhythm is matched against music in a local song library or an online song library, and the music whose rhythm information has the highest degree of match with the shaking rhythm is found and used as the target music.
Step 303, determining a target playing type according to the shaking direction and the shaking amplitude of each beat in the sampling period, wherein the playing types comprise classical, rock, ballad and popular.
The playing types in this embodiment are classified into classical, rock, ballad, and popular; it can be understood that one or more playing types may be added or removed according to the user's preferences and personal habits.
When listening to music of different playing types, the user shows different shaking state information. In this embodiment, the user's shaking direction and shaking amplitude are collected, and the target playing type is determined according to the shaking direction and the shaking amplitude, which captures the details of the user experience and comes closer to the user's deeper needs.
In general, for rock-type music, the user's head shakes strongly in the up-down direction (vertical direction); for classical-type music, the user's head often sways strongly in the left-right direction (horizontal direction); while for popular-type and ballad-type music, the degree of dispersion of the shaking among the vertical, horizontal, and front-back directions is generally small.
In some embodiments, this step may include: respectively counting, within the sampling period, the numbers of beats whose shaking direction is horizontal shaking, vertical shaking, and front-back shaking; if the number of beats of vertical shaking is greater than the number of beats of front-back shaking and greater than the number of beats of horizontal shaking, determining that the target playing type is rock; if the variance of the numbers of horizontal, vertical, and front-back shaking beats is within a first preset range, determining that the target playing type is popular; if the variance of the numbers of horizontal, vertical, and front-back shaking beats is within a second preset range, determining that the target playing type is ballad, where the second preset range is smaller than the first preset range; and if the number of beats of horizontal shaking is greater than the number of beats of front-back shaking and greater than the number of beats of vertical shaking, determining that the target playing type is classical. Alternatively, the average value of the shaking amplitudes of the shaking actions within the sampling period is calculated: if the average value is within a first set range, the target playing type is determined to be rock; if it is within a second set range, popular; if it is within a third set range, ballad; and if it is within a fourth set range, classical; where the first set range is larger than the second set range, the second set range is larger than the third set range, and the third set range is larger than the fourth set range. Alternatively, both criteria can be combined: if the number of beats of vertical shaking is greater than the numbers of beats of front-back shaking and of horizontal shaking, and the average value of the shaking amplitudes of the beats is within the first set range, the target playing type is determined to be rock; if the variance of the beat numbers is within the first preset range and the average shaking amplitude is within the second set range, the target playing type is popular; if the variance of the beat numbers is within the second preset range and/or the average shaking amplitude is within the third set range, the target playing type is ballad; and if the number of beats of horizontal shaking is greater than the numbers of beats of front-back shaking and of vertical shaking, and the average shaking amplitude is within the fourth set range, the target playing type is classical, where the first set range is larger than the second set range, the second set range is larger than the third set range, and the third set range is larger than the fourth set range.
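To make the interplay of these conditions concrete, the following Python sketch implements the combined beat-count, variance, and amplitude variant; every numeric boundary stands in for the "set ranges" and "preset ranges" above, which the application leaves unspecified.

```python
from statistics import mean, pvariance

def determine_play_type(beats):
    """Each beat is assumed to be a (direction, amplitude) pair with direction
    in {"horizontal", "vertical", "front-back"}."""
    if not beats:
        return None
    counts = {"horizontal": 0, "vertical": 0, "front-back": 0}
    for direction, _ in beats:
        counts[direction] += 1
    variance = pvariance(counts.values())                     # spread of beats over directions
    avg_amplitude = mean(amplitude for _, amplitude in beats)
    vertical_dominant = counts["vertical"] > counts["front-back"] and counts["vertical"] > counts["horizontal"]
    horizontal_dominant = counts["horizontal"] > counts["front-back"] and counts["horizontal"] > counts["vertical"]

    if vertical_dominant and avg_amplitude >= 0.8:    # first set range (placeholder)
        return "rock"
    if horizontal_dominant and avg_amplitude < 0.2:   # fourth set range (placeholder)
        return "classical"
    if variance <= 1.0:                               # second preset range (narrower, placeholder)
        return "ballad"
    if variance <= 4.0:                               # first preset range (placeholder)
        return "popular"
    return "popular"                                  # fallback for unmatched combinations

print(determine_play_type([("vertical", 0.9), ("vertical", 1.0),
                           ("front-back", 0.85), ("vertical", 0.95)]))  # rock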
Generally, for rock-type music, the shaking direction and shaking amplitude of the user's head change from beat to beat and the change frequency is relatively high; for popular-type music the change frequency is lower than for rock, for ballad-type music it is lower than for popular, and for classical-type music it is the lowest.
In some embodiments, this step may include: acquiring a first change frequency of the shaking direction and a second change frequency of the shaking amplitude between beats within the sampling period; if the first change frequency and the second change frequency are within a fifth set range, determining that the target playing type is rock; if they are within a sixth set range, popular; if they are within a seventh set range, ballad; and if they are within an eighth set range, classical; where the fifth set range is larger than the sixth set range, the sixth set range is larger than the seventh set range, and the seventh set range is larger than the eighth set range. Because the shaking action of the user's head in each beat is not a repeated action along a single direction, comprehensively considering the change rate of the shaking direction and the change rate of the shaking amplitude allows the shaking actions to be interpreted more completely, and matching the target playing type on this basis can improve the matching accuracy.
Here, "the first change frequency and the second change frequency are within the fifth set range" may mean that both the first change frequency and the second change frequency are within the fifth set range, or that a weighted sum of the first change frequency and the second change frequency is within the fifth set range. The same applies, correspondingly, to the sixth, seventh, and eighth set ranges.
Alternatively, in some embodiments, this step may include: acquiring the first change frequency of the shaking direction between beats within the sampling period; if the first change frequency is within the fifth set range, determining that the target playing type is rock; if it is within the sixth set range, popular; if it is within the seventh set range, ballad; and if it is within the eighth set range, classical. The relationship among the fifth, sixth, seventh, and eighth set ranges is as described in the above example and is not repeated here. This design reduces the amount of data the processor needs to process and improves the matching speed of the target playing type.
Alternatively, in some embodiments, this step may include: acquiring the second change frequency of the shaking amplitude between beats within the sampling period; if the second change frequency is within the fifth set range, determining that the target playing type is rock; if it is within the sixth set range, popular; if it is within the seventh set range, ballad; and if it is within the eighth set range, classical. The relationship among the fifth, sixth, seventh, and eighth set ranges is as described in the above example and is not repeated here. This design can likewise reduce the amount of data the processor needs to process and improve the matching speed of the target playing type.
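A corresponding sketch for the change-frequency variants; the weighting, the minimal amplitude step, and the boundaries of the fifth to eighth set ranges are placeholders, not values from the application.

```python
def determine_play_type_by_change(beats, sample_period_s, weights=(0.5, 0.5)):
    """Classify by how often the shaking direction (first change frequency) and
    the shaking amplitude (second change frequency) change between consecutive
    beats within the sampling period; beats are (direction, amplitude) pairs."""
    if len(beats) < 2:
        return None
    direction_changes = sum(1 for (d1, _), (d2, _) in zip(beats, beats[1:]) if d1 != d2)
    amplitude_changes = sum(1 for (_, a1), (_, a2) in zip(beats, beats[1:])
                            if abs(a2 - a1) > 0.1)            # minimal amplitude step (assumed)
    f1 = direction_changes / sample_period_s                   # first change frequency (Hz)
    f2 = amplitude_changes / sample_period_s                   # second change frequency (Hz)
    score = weights[0] * f1 + weights[1] * f2                  # weighted sum, as one option above
    if score >= 2.0:                                           # fifth set range (placeholder)
        return "rock"
    if score >= 1.2:                                           # sixth set range (placeholder)
        return "popular"
    if score >= 0.5:                                           # seventh set range (placeholder)
        return "ballad"
    return "classical"                                         # eighth set range
```

Using only f1 or only f2 would correspond to the two simplified variants described above.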
Step 304, playing the target music according to the target playing type.
According to the music playing control method provided by this embodiment, the target music to be played is matched according to the user's shaking rhythm, and the playing type of the target music is determined according to the shaking direction and the shaking amplitude of each beat, so that the playing control of the music comes closer to the user's personalized needs and the interest of music playing control is improved.
Fig. 5 is a schematic structural diagram of a music playing control apparatus provided in an embodiment of the present application, where the apparatus may be implemented by software and/or hardware and integrated in a wearable device. As shown in fig. 5, the apparatus includes a shake state information acquisition module 41, a target music determination module 42, and a target music playing module 43.
The shaking state information acquiring module 41 is used for acquiring shaking state information of a body part of a user wearing the wearable device;
the target music determining module 42 is configured to match target music to be played according to the shaking state information;
the target music playing module 43 is configured to control playing of the target music.
The device provided by this embodiment can match and play corresponding target music to be played according to the shaking state information of the user's body part, such as head shaking information, arm shaking information, or foot shaking information, which solves the problems in the prior art that music playing control is cumbersome to operate and insufficiently intelligent, optimizes the music playing control operation, enriches the functions of the wearable device, and improves user adhesion to the wearable device.
Optionally, the shake state information includes a shake tempo, and the target music determination module is configured to:
and determining target music to be played, wherein the rhythm information of the target music is matched with the shaking rhythm according to the shaking rhythm.
Optionally, the apparatus further includes a music tempo information determining module, specifically configured to:
if the wearable device is currently in a music playing state, acquiring the current shaking rhythm of the body part of the user wearing the wearable device;
and generating rhythm information of the currently played music according to the current shaking rhythm.
Optionally, the shaking state information obtaining module is specifically configured to:
acquiring the shaking state information through a motion sensor of the wearable device; alternatively,
acquiring the shaking state information through a bone conduction microphone of the wearable device.
Optionally, the shake state information obtaining module is further configured to obtain the mouth shape information of the user while obtaining the shake state information of the body part of the user, and the target music determining module is specifically configured to:
and matching the target music to be played according to the shaking state information and the mouth shape information.
Optionally, the shaking state information comprises a shaking direction and a shaking amplitude, the shaking direction comprises horizontal shaking, vertical shaking and front and back shaking, and the device further comprises:
the shaking direction amplitude acquisition module is used for acquiring the shaking direction and the shaking amplitude of each shaking action in the sampling period;
the target playing type determining module is used for determining a target playing type according to the shaking direction and the shaking amplitude of each beat in the sampling period, wherein the playing types comprise classical, rock, ballad and popular;
and the second target music playing module is used for playing the target music according to the target playing type.
Optionally, the target play type determining module is specifically configured to:
respectively calculating the beat numbers of horizontal shaking, vertical shaking and front-back shaking in the shaking direction in the sampling period;
if the number of beats of the vertical shaking is greater than the number of beats of the front shaking and the back shaking and greater than the number of beats of the horizontal shaking, and/or the average value of the shaking amplitudes of all beats is within a first set range, determining that the target playing type is rock and roll;
if the variance of the horizontal shaking beats, the vertical shaking beats and the front and back shaking beats is within a first preset range and/or the average value of the shaking amplitudes of each beat is within a second preset range, determining that the target playing type is popular;
if the variance of the horizontal shaking beats, the vertical shaking beats and the front and back shaking beats is within a second preset range and/or the average value of the shaking amplitudes of the beats is within a third preset range, determining that the target playing type is ballad, wherein the second preset range is smaller than the first preset range;
if the beat number of the horizontal shaking is larger than the beat number of the front shaking and the back shaking and is larger than the beat number of the vertical shaking, and/or the average value of the shaking amplitudes of all beats is within a fourth set range, determining that the target playing type is classical;
the first setting range is larger than the second setting range, the second setting range is larger than the third setting range, and the third setting range is larger than the fourth setting range.
Optionally, the target play type determining module is specifically configured to:
acquiring a first change frequency in a shaking direction and a second change frequency in a shaking amplitude between beats in a sampling period;
if the first change frequency and/or the second change frequency are/is within a fifth set range, determining that the target playing type is rock;
if the first change frequency and/or the second change frequency are/is within a sixth set range, determining that the target playing type is popular;
if the first variation frequency and/or the second variation frequency are/is within a seventh set range, determining that the target playing type is a ballad;
if the first change frequency and/or the second change frequency are/is within an eighth set range, determining that the target playing type is classical;
the fifth setting range is larger than the sixth setting range, the sixth setting range is larger than the seventh setting range, and the seventh setting range is larger than the eighth setting range.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a music playback control method, the method including: acquiring shaking state information of a body part of a user wearing the wearable device; matching target music to be played according to the shaking state information; and controlling the target music to be played.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected through a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the music playing control operation described above, and may also perform related operations in the music playing control method provided in any embodiments of the present application.
The embodiment of the application provides wearable equipment, and the music playing control device provided by the embodiment of the application can be integrated in the wearable equipment. Fig. 6 is a schematic structural diagram of a wearable device according to an embodiment of the present application. The wearable device 500 may include: a memory 510 for storing executable program code; the processor 520 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 510, for performing: acquiring shaking state information of a body part of a user wearing the wearable device; matching target music to be played according to the shaking state information; and controlling the target music to be played.
The memory and the processor listed in the above examples are part of components of the wearable device, and the wearable device may further include other components. Fig. 7 is a block diagram of a wearable device according to an embodiment of the present disclosure, and fig. 8 is a schematic entity diagram of a wearable device according to an embodiment of the present disclosure. As shown in fig. 7 and 8, the wearable device may include: the device includes a memory 601, a processor (CPU) 602 (hereinafter, referred to as CPU), a display Unit 603, a touch panel 604, a heart rate detection module 605, a distance sensor 606, a camera 607 (including a front camera 6071 and a rear camera 6072), a bone conduction speaker 608, a microphone 609 (which may be a bone conduction microphone, for example), a breathing lamp 610, and a motion sensor 612. These components communicate over one or more communication buses or signal lines 611 (hereinafter also referred to as internal transmission lines).
It should be understood that the illustrated wearable device is merely one example, and that the wearable device may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The wearable device integrated with the music playing control device provided in this embodiment is described in detail below, and the wearable device takes smart glasses as an example.
A memory 601, where the memory 601 is accessible by the CPU 602; the memory 601 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The display component 603 can be used for displaying image data and a control interface of an operating system, the display component 603 is embedded in a frame of the smart glasses, an internal transmission line 611 is arranged inside the frame, and the internal transmission line 611 is connected with the display component 603.
And a touch panel 604, the touch panel 604 being disposed at an outer side of at least one smart glasses temple for acquiring touch data, the touch panel 604 being connected to the CPU602 through an internal transmission line 611. The touch panel 604 can detect finger sliding and clicking operations of the user, and accordingly transmit the detected data to the processor 602 for processing to generate corresponding control commands, which may be, for example, a left shift command, a right shift command, an up shift command, a down shift command, and the like. For example, the display component 603 can display the virtual image data transmitted by the processor 602, and the virtual image data can be correspondingly changed according to the user operation detected by the touch panel 604, specifically, the screen switching can be performed, and when a left shift instruction or a right shift instruction is detected, the previous or next virtual image screen is correspondingly switched; when the display section 603 displays video play information, the left shift instruction may be to perform playback of the play content, and the right shift instruction may be to perform fast forward of the play content; when the display part 603 displays editable text content, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction may be displacement operations on a cursor, that is, the position of the cursor may be moved according to a touch operation of a user on the touch pad; when the content displayed by the display component 603 is a game moving picture, the left shift instruction, the right shift instruction, the upward shift instruction, and the downward shift instruction may be for controlling an object in a game, for example, in an airplane game, the flying direction of the airplane may be controlled by the left shift instruction, the right shift instruction, the upward shift instruction, and the downward shift instruction, respectively; when the display part 603 can display video pictures of different channels, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction can perform switching of different channels, wherein the up shift instruction and the down shift instruction can be switching to a preset channel (such as a common channel used by a user); when the display section 603 displays a still picture, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction may switch between different pictures, where the left shift instruction may be to a previous picture, the right shift instruction may be to a next picture, the up shift instruction may be to a previous picture set, and the down shift instruction may be to a next picture set. The touch panel 604 can also be used to control display switches of the display portion 603, for example, when the touch area of the touch panel 604 is pressed for a long time, the display portion 603 is powered on to display an image interface, when the touch area of the touch panel 604 is pressed for a long time again, the display portion 603 is powered off, and when the display portion 603 is powered on, the brightness or resolution of an image displayed in the display portion 603 can be adjusted by performing a slide-up and slide-down operation on the touch panel 604.
The heart rate detection module 605 is used to measure the user's heart rate data, where the heart rate refers to the number of heartbeats per minute; the heart rate detection module 605 is arranged on the inner side of the temple. Specifically, the heart rate detection module 605 may obtain human electrocardiographic data using dry electrodes in an electric-pulse measurement manner and determine the heart rate according to the amplitude peaks in the electrocardiographic data; the heart rate detection module 605 may also consist of a light emitter and a light receiver that measure the heart rate photoelectrically, in which case the heart rate detection module 605 is arranged at the bottom of the temple, at the earlobe of the human auricle. After collecting the heart rate data, the heart rate detection module 605 sends it to the processor 602 for data processing to obtain the wearer's current heart rate value. In an embodiment, after determining the user's heart rate value, the processor 602 can display this value in the display component 603 in real time; optionally, the processor 602 can trigger an alarm when it determines that the heart rate value is low (for example, below 50) or high (for example, above 100), and at the same time send the heart rate value and/or the generated alarm information to the server through the communication module.
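A tiny sketch of the threshold behaviour just described; the bounds of 50 and 100 are taken from the example in the text, while the print calls stand in for the display component and the communication module.

```python
def handle_heart_rate(heart_rate_bpm, low=50, high=100):
    """Show the current heart rate and raise an alarm when it is too low or too high."""
    print(f"current heart rate: {heart_rate_bpm} bpm")  # would be rendered on the display component
    if heart_rate_bpm < low or heart_rate_bpm > high:
        alarm = {"heart_rate": heart_rate_bpm, "status": "abnormal"}
        print("alarm:", alarm)                          # would be sent to the server
        return alarm
    return None
```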
The distance sensor 606 may be disposed on the frame; the distance sensor 606 is used for sensing the distance from the human face to the frame, and the distance sensor 606 may be implemented on the basis of an infrared sensing principle. Specifically, the distance sensor 606 transmits the acquired distance data to the processor 602, and the processor 602 controls the brightness of the display component 603 according to the distance data. Illustratively, the processor 602 controls the display component 603 to be in an on state when it determines that the distance detected by the distance sensor 606 is less than 5 cm, and controls the display component 603 to be in an off state when it determines that the distance sensor 606 does not detect an approaching object.
The breathing lamp 610 may be disposed at an edge of the frame; when the display component 603 turns off its display screen, the breathing lamp 610 may be turned on, under the control of the processor 602, to produce a gradually brightening and dimming light effect.
The camera 607 may be a front camera module 6071 disposed at the upper frame edge for collecting image data in front of the user, may be a rear camera module 6072 for collecting eyeball information of the user (since the plane where the rear camera 6072 is located faces inward when the glasses are photographed, the rear camera 6072 cannot be seen in the figure and is indicated by a dotted line), or may be a combination of the two. The rear camera module 6072 can be rotated, so that it can also collect mouth shape information of the user, and the rotation of the rear camera 6072 can be controlled according to actual needs and set to a given angle. Specifically, when the camera 607 collects a front image, the collected image is sent to the processor 602 for recognition processing, and a corresponding trigger event is triggered according to the recognition result. Illustratively, when the user wears the wearable device at home, the collected front image is recognized; if a furniture item is recognized, whether a corresponding control event exists is queried correspondingly, and if so, a control interface corresponding to the control event is displayed in the display component 603, so that the user can control the corresponding furniture item through the touch panel 604, where the furniture item and the smart glasses are connected over a network through Bluetooth or a wireless ad hoc network. When the user wears the wearable device outdoors, a target recognition mode can be started correspondingly. The target recognition mode can be used for recognizing specific people: the camera 607 sends collected images to the processor 602 for face recognition processing, and if a preset face is recognized, a sound broadcast can be made correspondingly through a speaker integrated in the smart glasses. The target recognition mode can also be used for recognizing different plants: for example, the processor 602 records the current image collected by the camera 607 according to a touch operation on the touch panel 604 and sends it to the server for recognition through the communication module, the server recognizes the plant in the collected image and feeds back the related plant name to the smart glasses, and the feedback data is displayed in the display component 603.
The camera 607 may also be configured to collect images of the user's eyes, such as the eyeballs, and generate different control instructions by recognizing the rotation of the eyeballs; for example, an upward rotation of the eyeball generates an up shift control instruction, a downward rotation generates a down shift control instruction, a leftward rotation generates a left shift control instruction, and a rightward rotation generates a right shift control instruction. As above, the display component 603 may display the virtual image data transmitted by the processor 602, and the virtual image data may be changed according to the control instruction generated from the change in eyeball movement of the user detected by the camera 607; specifically, screen switching may be performed, and when a left shift control instruction or a right shift control instruction is detected, the previous or next virtual image screen is switched to. When the display component 603 displays video playing information, the left shift control instruction may trigger playback of the playing content, and the right shift control instruction may trigger fast forward of the playing content. When the display component 603 displays editable text content, the left shift, right shift, up shift and down shift control instructions may be displacement operations on a cursor, that is, the position of the cursor may be moved according to the detected eyeball movement of the user. When the content displayed by the display component 603 is a game picture, the left shift, right shift, up shift and down shift control instructions may control an object in the game; for example, in an airplane game, the flying direction of the airplane may be controlled by the left shift, right shift, up shift and down shift control instructions respectively. When the display component 603 displays video pictures of different channels, the left shift, right shift, up shift and down shift control instructions can switch between different channels, where the up shift and down shift control instructions may switch to preset channels (such as channels commonly used by the user). When the display component 603 displays a still picture, the left shift, right shift, up shift and down shift control instructions may switch between different pictures, where the left shift control instruction may switch to the previous picture, the right shift control instruction to the next picture, the up shift control instruction to the previous picture set, and the down shift control instruction to the next picture set.
And a bone conduction speaker 608, the bone conduction speaker 608 being disposed on the inner wall side of at least one temple and used for converting a received audio signal transmitted from the processor 602 into a vibration signal. The bone conduction speaker 608 transmits sound to the inner ear through the skull: it converts the electrical audio signal into a vibration signal, which is transmitted through the skull into the cochlea and then perceived by the auditory nerve. Using the bone conduction speaker 608 as the sound production device reduces the thickness of the hardware structure and makes the device lighter; at the same time, it produces no electromagnetic radiation and is not affected by electromagnetic radiation, and it has the advantages of noise resistance, water resistance, and leaving both ears free.
A microphone 609, which may be disposed on the lower frame edge, is used for collecting external (user, ambient) sounds and transmitting them to the processor 602 for processing. Illustratively, the microphone 609 collects the sound made by the user and the processor 602 performs voiceprint recognition on it; if the sound is recognized as the voiceprint of an authenticated user, subsequent voice control can be accepted correspondingly. Specifically, the user may utter a voice command, the microphone 609 sends the collected voice to the processor 602 for recognition so as to generate a corresponding control instruction according to the recognition result, such as "power on", "power off", "increase display brightness" or "decrease display brightness", and the processor 602 subsequently executes a corresponding control process according to the generated control instruction.
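A minimal sketch of this voiceprint-gated command handling is given below; the command table, the handle_voice helper and the instruction labels are assumptions made for illustration only.

```python
# Illustrative sketch: accept a spoken command only when the speaker's voiceprint
# matches an enrolled user, then map the recognized text to a control instruction.
COMMANDS = {
    "power on": "DISPLAY_ON",
    "power off": "DISPLAY_OFF",
    "increase display brightness": "BRIGHTNESS_UP",
    "decrease display brightness": "BRIGHTNESS_DOWN",
}

def handle_voice(text: str, voiceprint_ok: bool) -> str:
    """Return the control instruction for a recognized utterance, or ignore it."""
    if not voiceprint_ok:        # speaker not authenticated: ignore the command
        return "IGNORED"
    return COMMANDS.get(text.lower(), "UNKNOWN")

print(handle_voice("Power on", True))    # DISPLAY_ON
print(handle_voice("Power on", False))   # IGNORED
```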
The motion sensor 612, which includes a distance sensor, an acceleration sensor, a gyroscope and the like, can detect shaking state information of a body part, such as a shaking rhythm, a shaking direction and a shaking amplitude.
The wearable device provided in the embodiments of the present application can match and play corresponding target music to be played according to the shaking state information of a body part of the user, such as head shaking information, arm shaking information or foot shaking information. This solves the problems in the prior art that music playing control is cumbersome to operate and not intelligent enough, optimizes the music playing control operation, enriches the functions of the wearable device, and improves the stickiness of users of the wearable device.
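The overall flow can be sketched as follows in Python; the ShakeState structure, the example music library and the tempo-matching rule are illustrative assumptions and not the concrete matching strategy defined by the claims below.

```python
# Illustrative sketch of the overall flow: acquire shaking state information,
# match target music to be played, control playback.
from dataclasses import dataclass

@dataclass
class ShakeState:
    rhythm_bpm: float      # shaking rhythm, beats per minute
    direction: str         # "horizontal", "vertical" or "front_back"
    amplitude: float       # relative shaking amplitude

# assumed music library: title -> tempo in BPM
LIBRARY = {"song_a": 60.0, "song_b": 95.0, "song_c": 128.0}

def match_target_music(state: ShakeState) -> str:
    """Pick the track whose tempo is closest to the detected shaking rhythm."""
    return min(LIBRARY, key=lambda title: abs(LIBRARY[title] - state.rhythm_bpm))

def play(title: str) -> None:
    print(f"playing {title}")   # stand-in for the real playback control

state = ShakeState(rhythm_bpm=120.0, direction="vertical", amplitude=0.8)
play(match_target_music(state))   # playing song_c
```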
The music playing control device, the storage medium and the wearable device provided in the above embodiments can execute the music playing control method provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method. For details of the music playing control method provided in any of the embodiments of the present application, reference may be made to the above-mentioned embodiments.
The foregoing is considered as illustrative of the preferred embodiments of the invention and the technical principles employed. The present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (11)

1. A music playback control method, comprising:
acquiring shaking state information of a body part of a user wearing the wearable device;
matching target music to be played according to the shaking state information;
controlling the target music to be played;
the shaking state information comprises a shaking direction and a shaking amplitude, the shaking direction comprises horizontal shaking, vertical shaking and front-back shaking, and the method further comprises the following steps:
acquiring the shaking direction and the shaking amplitude of each shaking action in a sampling period;
determining a target playing type according to the shaking direction and the shaking amplitude of each beat in the sampling period, wherein the playing types comprise classical, rock, ballad and popular;
playing the target music according to the target playing type;
the determining the target playing type according to the shaking direction and the shaking amplitude of each beat in the sampling period comprises the following steps:
respectively calculating the beat numbers of horizontal shaking, vertical shaking and front-back shaking in the shaking direction in the sampling period;
if the number of beats of the vertical shaking is greater than the number of beats of the front-back shaking and greater than the number of beats of the horizontal shaking, determining that the target playing type is rock and roll;
if the variance of the horizontal shaking beat number, the vertical shaking beat number and the front and back shaking beat numbers is within a first preset range, determining that the target playing type is popular;
if the variance of the horizontal shaking beats, the vertical shaking beats and the front and back shaking beats is within a second preset range, determining that the target playing type is a ballad, wherein the second preset range is smaller than the first preset range;
and if the beat number of the horizontal shaking is greater than the beat number of the front and back shaking and greater than the beat number of the vertical shaking, determining that the target playing type is classical.
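A minimal sketch of this decision logic, under the assumption of concrete numeric boundaries for the two preset variance ranges and an assumed ordering of the checks (the claim fixes neither), might look as follows:

```python
# Illustrative sketch of the genre decision in claim 1. The preset variance
# ranges and the ordering of the checks are assumptions.
from statistics import pvariance

FIRST_PRESET_RANGE = (2.0, 10.0)    # "popular" (assumed boundaries)
SECOND_PRESET_RANGE = (0.0, 2.0)    # "ballad"; smaller than the first range

def target_play_type(horizontal: int, vertical: int, front_back: int) -> str:
    """Decide the target playing type from per-direction beat counts in one sampling period."""
    if vertical > front_back and vertical > horizontal:
        return "rock"
    if horizontal > front_back and horizontal > vertical:
        return "classical"
    var = pvariance([horizontal, vertical, front_back])
    if SECOND_PRESET_RANGE[0] <= var < SECOND_PRESET_RANGE[1]:
        return "ballad"
    if FIRST_PRESET_RANGE[0] <= var < FIRST_PRESET_RANGE[1]:
        return "popular"
    return "popular"   # fallback when no rule applies (assumption)

print(target_play_type(horizontal=3, vertical=9, front_back=4))  # rock
```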
2. The method of claim 1, wherein the shake status information comprises a shake tempo, and wherein matching the target music to be played according to the shake status information comprises:
and determining, according to the shaking rhythm, target music to be played whose rhythm information matches the shaking rhythm.
3. The method of claim 2, further comprising:
if the body part is currently in a music playing state, acquiring the current shaking rhythm of the body part of a user wearing the wearable device;
and generating rhythm information of the currently played music according to the current shaking rhythm.
4. The method of claim 1, wherein obtaining the sway status information of the body part of the user wearing the wearable device comprises:
acquiring the shaking state information through a motion sensor of the wearable device; or,
acquiring the shaking state information through a bone conduction microphone of the wearable device.
5. The method of claim 1, wherein mouth shape information of the user is obtained simultaneously with the obtaining of the shaking state information of the body part of the user, and the matching of the target music to be played according to the shaking state information comprises:
and matching the target music to be played according to the shaking state information and the mouth shape information.
6. The method of claim 1, wherein determining the target playback type according to the shake direction and shake amplitude of each beat in the sampling period comprises:
calculating the average value of the shaking amplitude of each shaking motion in the sampling period;
if the average value is within a first set range, determining that the target playing type is rock;
if the average value is within a second set range, determining that the target playing type is popular;
if the average value is within a third set range, determining that the target playing type is a ballad;
if the average value is within a fourth set range, determining that the target playing type is classical;
the first setting range is larger than a second setting range, the second setting range is larger than a third setting range, and the third setting range is larger than a fourth setting range.
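A minimal sketch of this amplitude-based decision, with assumed numeric boundaries for the four set ranges (the claim only requires that each range be larger than the next), might look as follows:

```python
# Illustrative sketch of the decision in claim 6: average the shaking amplitude
# over the sampling period and look it up in one of four ordered ranges.
RANGES = [                      # (lower bound, upper bound, play type) - assumed values
    (0.75, 1.01, "rock"),       # first set range (largest amplitudes)
    (0.50, 0.75, "popular"),    # second set range
    (0.25, 0.50, "ballad"),     # third set range
    (0.00, 0.25, "classical"),  # fourth set range (smallest amplitudes)
]

def play_type_from_amplitudes(amplitudes: list) -> str:
    """Return the play type whose range contains the average shaking amplitude."""
    avg = sum(amplitudes) / len(amplitudes)
    for low, high, play_type in RANGES:
        if low <= avg < high:
            return play_type
    return "classical"   # fallback below all ranges (assumption)

print(play_type_from_amplitudes([0.8, 0.9, 0.7]))  # rock
```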
7. The method of claim 1, wherein determining the target playback type according to the shake direction and shake amplitude of each beat in the sampling period comprises:
acquiring a first change frequency in a shaking direction and a second change frequency in a shaking amplitude between beats in a sampling period;
if the first change frequency and the second change frequency are within a fifth set range, determining that the target playing type is rock;
if the first change frequency and the second change frequency are within a sixth set range, determining that the target playing type is popular;
if the first change frequency and the second change frequency are within a seventh set range, determining that the target playing type is a ballad;
if the first change frequency and the second change frequency are within an eighth set range, determining that the target playing type is classical;
the fifth setting range is larger than the sixth setting range, the sixth setting range is larger than the seventh setting range, and the seventh setting range is larger than the eighth setting range.
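A minimal sketch of this change-frequency decision is given below; how a change frequency is computed and the boundaries of the four set ranges are assumptions made for illustration.

```python
# Illustrative sketch of the decision in claim 7: classify by how often the
# shaking direction and the shaking amplitude change between beats.
def change_frequency(values: list) -> float:
    """Fraction of consecutive beat pairs whose value differs (direction or amplitude)."""
    changes = sum(1 for a, b in zip(values, values[1:]) if a != b)
    return changes / max(len(values) - 1, 1)

SET_RANGES = [                  # (lower bound, upper bound, play type) - assumed values
    (0.75, 1.01, "rock"),       # fifth set range (most frequent changes)
    (0.50, 0.75, "popular"),    # sixth set range
    (0.25, 0.50, "ballad"),     # seventh set range
    (0.00, 0.25, "classical"),  # eighth set range
]

def play_type_from_changes(directions: list, amplitudes: list) -> str:
    f1 = change_frequency(directions)   # first change frequency (shaking direction)
    f2 = change_frequency(amplitudes)   # second change frequency (shaking amplitude)
    for low, high, play_type in SET_RANGES:
        if low <= f1 < high and low <= f2 < high:
            return play_type
    return "classical"   # fallback when the two frequencies fall in different ranges (assumption)

print(play_type_from_changes(["v", "h", "v", "h"], [0.5, 0.8, 0.5, 0.9]))  # rock
```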
8. The method of claim 1, wherein the wearable device comprises smart glasses.
9. A music playback control apparatus, comprising:
the shaking state information acquisition module is used for acquiring shaking state information of a body part of a user wearing the wearable device;
the target music determining module is used for matching target music to be played according to the shaking state information;
the target music playing module is used for controlling the playing of the target music;
the shaking state information comprises a shaking direction and a shaking amplitude, the shaking direction comprises horizontal shaking, vertical shaking and front and back shaking, and the device further comprises:
the shaking direction amplitude acquisition module is used for acquiring the shaking direction and the shaking amplitude of each shaking action in the sampling period;
the target playing type determining module is used for determining a target playing type according to the shaking direction and the shaking amplitude of each beat in the sampling period, wherein the playing types comprise classical, rock, ballad and popular;
the second target music playing module is used for playing the target music according to the target playing type;
the target playing type determining module is specifically used for respectively calculating the beat numbers of horizontal shaking, vertical shaking and front-back shaking in the shaking direction in the sampling period; if the number of beats of the vertical shaking is greater than the number of beats of the front-back shaking and greater than the number of beats of the horizontal shaking, determining that the target playing type is rock and roll; if the variance of the horizontal shaking beat number, the vertical shaking beat number and the front and back shaking beat numbers is within a first preset range, determining that the target playing type is popular; if the variance of the horizontal shaking beats, the vertical shaking beats and the front and back shaking beats is within a second preset range, determining that the target playing type is a ballad, wherein the second preset range is smaller than the first preset range; and if the beat number of the horizontal shaking is greater than the beat number of the front-back shaking and greater than the beat number of the vertical shaking, determining that the target playing type is classical.
10. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing the music playback control method according to any one of claims 1 to 8.
11. A wearable device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the music playback control method of any of claims 1-8 when executing the computer program.
CN201811005539.0A 2018-08-30 2018-08-30 Music playing control method and device, storage medium and wearable device Active CN109032384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811005539.0A CN109032384B (en) 2018-08-30 2018-08-30 Music playing control method and device, storage medium and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811005539.0A CN109032384B (en) 2018-08-30 2018-08-30 Music playing control method and device, storage medium and wearable device

Publications (2)

Publication Number Publication Date
CN109032384A CN109032384A (en) 2018-12-18
CN109032384B true CN109032384B (en) 2021-09-28

Family

ID=64625923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811005539.0A Active CN109032384B (en) 2018-08-30 2018-08-30 Music playing control method and device, storage medium and wearable device

Country Status (1)

Country Link
CN (1) CN109032384B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083329A (en) * 2019-04-24 2019-08-02 深圳传音通讯有限公司 Terminal adjusting method and terminal
CN111984818A (en) * 2019-05-23 2020-11-24 北京地平线机器人技术研发有限公司 Singing following recognition method and device, storage medium and electronic equipment
CN110232911B (en) * 2019-06-13 2022-04-05 南京地平线集成电路有限公司 Singing following recognition method and device, storage medium and electronic equipment
WO2021044219A2 (en) * 2019-07-13 2021-03-11 Solos Technology Limited Hardware architecture for modularized eyewear systems apparatuses, and methods
CN111752388A (en) * 2020-06-19 2020-10-09 深圳振科智能科技有限公司 Application control method, device, equipment and storage medium
CN114153308B (en) * 2020-09-08 2023-11-21 阿里巴巴集团控股有限公司 Gesture control method, gesture control device, electronic equipment and computer readable medium
CN113160848A (en) * 2021-05-07 2021-07-23 网易(杭州)网络有限公司 Dance animation generation method, dance animation model training method, dance animation generation device, dance animation model training device, dance animation equipment and storage medium
CN113867524A (en) * 2021-09-10 2021-12-31 安克创新科技股份有限公司 Control method and device and intelligent audio glasses
CN215729152U (en) * 2021-10-09 2022-02-01 深圳市深科创投科技有限公司 Intelligent audio glasses with acceleration sensor
CN114816199A (en) * 2022-04-29 2022-07-29 西安歌尔泰克电子科技有限公司 Control method of wearable device, wearable device and computer storage medium
CN115185085A (en) * 2022-07-18 2022-10-14 佛山理成科技有限公司 Intelligent glasses and control method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007226935A (en) * 2006-01-24 2007-09-06 Sony Corp Audio reproducing device, audio reproducing method, and audio reproducing program
US9478229B2 (en) * 2013-12-10 2016-10-25 Massachusetts Institute Of Technology Methods and apparatus for recording impulsive sounds
CN104867506B (en) * 2015-04-08 2017-07-14 小米科技有限责任公司 The method and apparatus for automatically controlling music
CN108415764A (en) * 2018-02-13 2018-08-17 广东欧珀移动通信有限公司 Electronic device, game background music matching process and Related product

Also Published As

Publication number Publication date
CN109032384A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109032384B (en) Music playing control method and device, storage medium and wearable device
US11450073B1 (en) Multi-user virtual and augmented reality tracking systems
CN109120790B (en) Call control method and device, storage medium and wearable device
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
TWI658377B (en) Robot assisted interaction system and method thereof
US11205408B2 (en) Method and system for musical communication
US11247021B2 (en) Craniaofacial emotional response influencer for audio and visual media
CN109119057A (en) Musical composition method, apparatus and storage medium and wearable device
JP2020039029A (en) Video distribution system, video distribution method, and video distribution program
CN109145847B (en) Identification method and device, wearable device and storage medium
CN109040462A (en) Stroke reminding method, apparatus, storage medium and wearable device
CN109068126B (en) Video playing method and device, storage medium and wearable device
JP7207468B2 (en) Output control device, output control method and program
KR20150137453A (en) Mobile device and control method using brainwave
US20200367789A1 (en) Wearable computing apparatus with movement sensors and methods therefor
CN109831817A (en) Terminal control method, device, terminal and storage medium
CN109257490A (en) Audio-frequency processing method, device, wearable device and storage medium
CN109240498B (en) Interaction method and device, wearable device and storage medium
CN109101101B (en) Wearable device control method and device, storage medium and wearable device
CN109144263A (en) Social householder method, device, storage medium and wearable device
EP3611612A1 (en) Determining a user input
CN109361727B (en) Information sharing method and device, storage medium and wearable device
CN109145010B (en) Information query method and device, storage medium and wearable device
US11726551B1 (en) Presenting content based on activity
US11771618B2 (en) Adaptive speech and biofeedback control of sexual stimulation devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant