CN113535111A - Audio playing control method, device, equipment and storage medium


Info

Publication number: CN113535111A
Application number: CN202011361319.9A
Authority: CN (China)
Prior art keywords: driver, audio, vehicle, driving, speed
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 何珂
Current and original assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Tencent Technology Shenzhen Co Ltd (the priority date is an assumption and is not a legal conclusion)
Priority to CN202011361319.9A
Publication of CN113535111A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 — Sound input; sound output
    • G06F 3/165 — Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/213 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods

Abstract

The embodiments of the present application provide a method, a device, equipment and a storage medium for controlling audio playing, relating to the field of computer technology. The method comprises the following steps: acquiring the driving speed and driving road condition of a vehicle, and determining the emotional attribute of the driver according to the driving speed and driving road condition; obtaining, from audio sets divided according to the attribute information of the audio, a target audio set for adjusting the emotional attribute of the driver; and playing the audio in the target audio set. Because the driver's emotional attribute is determined from the vehicle's driving speed and road condition, and the corresponding target audio set is then obtained from the audio sets to adjust that attribute, the audio assists the driver's driving, which improves driving safety on the one hand and the driver's audio-listening experience on the other.

Description

Audio playing control method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for controlling audio playing.
Background
With the rapid development of the Internet of Vehicles, more and more vehicle manufacturers integrate a music player into the car's central control screen. Drivers can listen to songs as a form of leisure and relieve driving fatigue through music.
However, the playlists in related music applications are preset and the songs are relatively fixed, so songs unsuited to the driver's current driving state are sometimes played, which affects the experience.
Disclosure of Invention
The embodiments of the present application provide a method, a device, equipment and a storage medium for controlling audio playing, which are used to assist driving through audio and improve the driver's audio-listening experience.
In one aspect, an embodiment of the present application provides a method for controlling audio playing, where the method includes:
acquiring the running speed and the running road condition of a vehicle;
determining the emotional attribute of the driver according to the driving speed and the driving road condition;
obtaining a target audio set for adjusting the emotional attribute of the driver from each audio set, wherein each audio set is divided according to the attribute information of the audio;
and playing the audio in the target audio set.
In one aspect, an embodiment of the present application provides an apparatus for controlling audio playing, where the apparatus includes:
the acquisition module is used for acquiring the driving speed and the driving road condition of the vehicle;
the identification module is used for determining the emotional attribute of the driver according to the driving speed and the driving road condition;
the matching module is used for obtaining a target audio set for adjusting the emotional attribute of the driver from each audio set, and each audio set is divided according to the attribute information of the audio;
and the playing module is used for playing the audio in the target audio set.
Optionally, the identification module is specifically configured to:
acquiring human body sign information of a driver;
and determining the emotional attribute of the driver according to the human body sign information of the driver, the driving speed and the driving road condition.
Optionally, the identification module is specifically configured to:
respectively carrying out quantitative processing on the human body sign information, the driving speed and the driving road condition of the driver to obtain influence factors respectively corresponding to the human body sign information, the driving speed and the driving road condition of the driver;
and determining the emotional attribute of the driver according to the influence factors corresponding to the human body sign information of the driver, the driving speed and the driving road condition.
Optionally, the identification module is specifically configured to:
acquiring a target human body sign interval matched with the human body sign information of the driver from each human body sign interval, and taking an influence factor corresponding to the target human body sign interval as an influence factor corresponding to the human body sign information of the driver;
acquiring a target speed interval matched with the running speed from each speed interval, and taking an influence factor corresponding to the target speed interval as an influence factor corresponding to the running speed;
and acquiring a target road condition grade matched with the driving road condition from each road condition grade, and taking an influence factor corresponding to the target road condition grade as an influence factor corresponding to the driving road condition.
Optionally, the obtaining module is specifically configured to:
acquiring the position information of the vehicle by adopting a positioning module;
acquiring time information of a vehicle by adopting a timing module;
and obtaining the running speed of the vehicle according to the position information of the vehicle and the time information of the vehicle.
Optionally, the driving road condition comprises a level state of a driving road surface and a congestion state of a driving road section;
the acquisition module is specifically configured to:
obtaining the leveling state of the running road surface by adopting a road surface leveling detection module;
and acquiring the congestion state of the driving road section from a road condition server.
Optionally, the respective audio sets are divided according to the audio signal frequency corresponding to the audio.
Optionally, the playing module is specifically configured to:
and playing the audios in the target audio set sequentially according to their arrangement order, where the arrangement order is determined by the audios' scores in descending order, and an audio's score is determined by its degree of influence on the driver's emotional attribute.
Optionally, the playing module is further configured to:
and when the driving speed or the driving road condition of the vehicle meets the preset condition, playing safety prompt voice.
In one aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for controlling audio playing when executing the program.
In one aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program executable by a computer device, and when the program runs on the computer device, the computer device is caused to execute the steps of the above-mentioned control method for audio playing.
In the embodiment of the application, the emotion attribute of the driver is determined based on the driving speed and the driving road condition of the vehicle, then the corresponding target audio sets are obtained from the audio sets to adjust the emotion attribute of the driver, and the driving of the driver is assisted through the audio, so that the driving safety is improved on one hand, and the experience of listening to the audio of the driver is improved on the other hand.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a method for controlling audio playing according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a system architecture according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of a method for controlling audio playing according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a method for controlling audio playing according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a control device for audio playing according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following is a description of the design concept of the embodiments of the present application.
In the music applications installed on vehicle-mounted devices, the playlist is generally preset and the songs are fixed, but the driver's driving state changes with road conditions, driving speed and other factors; as a result, the music played by the application is sometimes unsuited to the driver's current driving state, which affects the experience.
Analysis shows that music with high arousal and a fast tempo more easily induces high-speed driving, while music with low arousal and a slow tempo leaves the driver freer in choosing a speed and is more conducive to controlling the vehicle. In addition, when the driver is tense and under high driving pressure, playing soft, slow-tempo music helps the driver respond more quickly when dealing with problems; when the driver is relaxed and under little driving pressure, playing rhythmic music helps maintain that positive mood. If different types of music are recommended according to the driver's driving state, music-assisted driving can be realized, providing effective musical support for safe driving and improving the driver's listening experience.
In view of this, an embodiment of the present application provides a method for controlling audio playing, where the method specifically includes: the method comprises the steps of obtaining the driving speed and the driving road condition of a vehicle, and then determining the emotional attribute of a driver according to the driving speed and the driving road condition. And obtaining a target audio set for adjusting the emotional attribute of the driver from each audio set, wherein each audio set is divided according to the attribute information of the audio. And then playing the audio in the target audio set.
In the embodiment of the application, the emotion attribute of the driver is determined based on the driving speed and the driving road condition of the vehicle, then the corresponding target audio sets are obtained from the audio sets to adjust the emotion attribute of the driver, and the driving of the driver is assisted through the audio, so that the driving safety is improved on one hand, and the experience of listening to the audio of the driver is improved on the other hand.
Referring to fig. 1, a system architecture diagram applicable to the embodiment of the present application is shown, where the system architecture includes at least a vehicle-mounted terminal device 101 and an application server 102.
The in-vehicle terminal apparatus 101 is embedded in a vehicle and has an audio application installed in advance. The audio application may be a pre-installed client application, a web-page application, an applet, or the like. The audio application may be a vehicle music application, a vehicle broadcast application, a vehicle voice live application, etc. The control method for audio playing in the embodiments of the present application can be applied to various audio applications for personalized audio recommendation. For example, it may be applied to a vehicle music application for background music recommendation or for music recommendation by a personalized radio station within the application.
The in-vehicle terminal apparatus 101 may include one or more processors 1011, a memory 1012, an I/O interface 1013 interacting with the application server 102, and a display panel 1014, and the like. The vehicle-mounted terminal device 101 may be, but is not limited to, a navigation device, an automatic driving device, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, and the like.
The application server 102 is a background server corresponding to the audio application and provides a service for the audio application. The application server 102 may include one or more processors 1021, memory 1022, and an I/O interface 1023 that interacts with the in-vehicle terminal apparatus 101, and the like. In addition, application server 102 may also configure database 1024. The application server 102 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The vehicle-mounted terminal device 101 and the application server 102 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
The control method of audio playing may be executed by the vehicle-mounted terminal device 101, or may be executed by the vehicle-mounted terminal device 101 interacting with the application server 102.
In the first case, the control method of audio playback is executed by the in-vehicle terminal apparatus 101.
The vehicle-mounted terminal device 101 acquires the driving speed and the driving road condition of the vehicle, and then determines the emotional attribute of the driver according to the driving speed and the driving road condition. And then obtaining a target audio set for adjusting the emotional attribute of the driver from each locally stored audio set, wherein each audio set is divided according to the attribute information of the audio. And then playing the audio in the target audio set.
In the second case, the control method of audio playback is interactively performed by the in-vehicle terminal apparatus 101 and the application server 102.
The in-vehicle terminal apparatus 101 acquires the traveling speed and the traveling road condition of the vehicle, and then transmits the traveling speed and the traveling road condition of the vehicle to the application server 102. The application server 102 determines the emotional attribute of the driver according to the driving speed and the driving road condition of the vehicle. Then, from the respective audio sets divided according to the attribute information of the audio, a target audio set for adjusting the emotional attribute of the driver is obtained. And then sends the target audio set to the in-vehicle terminal apparatus 101. The in-vehicle terminal apparatus 101 plays the audio in the target audio set.
In the third case, the control method of audio playback is interactively performed by the in-vehicle terminal apparatus 101 and the application server 102.
The vehicle-mounted terminal device 101 acquires the traveling speed and the traveling road condition of the vehicle, and then determines the emotional attribute of the driver according to the traveling speed and the traveling road condition of the vehicle. The in-vehicle terminal apparatus 101 transmits the emotion attribute of the driver to the application server 102. The application server 102 obtains a target audio set for adjusting the emotional attribute of the driver from each audio set divided according to the attribute information of the audio. The application server 102 transmits the target audio set to the in-vehicle terminal apparatus 101. The in-vehicle terminal apparatus 101 plays the audio in the target audio set.
Based on the system architecture diagram shown in fig. 1, an embodiment of the present application provides a flow of a control method for audio playing, as shown in fig. 2, the flow of the method is executed by a computer device, where the computer device may be an in-vehicle terminal device 101 or an application server 102, and includes the following steps:
step S201, a driving speed and a driving road condition of the vehicle are obtained.
Specifically, the running speed of the vehicle may be a real-time running speed of the vehicle, or may be an average speed of the vehicle over a period of time. The driving road condition includes a level state of a driving road surface and a congestion state of a driving road section.
And step S202, determining the emotional attribute of the driver according to the driving speed and the driving road condition.
Specifically, the emotional attributes include at least emotion types and the different degrees corresponding to each emotion type. For example, the emotion types include tension, relaxation, happiness, sadness, and the like; for tension, there are different degrees such as slightly tense and very tense.
A mapping between driving speed, driving road condition, and emotional attribute is preset. After the vehicle's current driving speed and road condition are acquired, the preset mapping is queried to determine the driver's emotional attribute.
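As a minimal sketch of this lookup (the speed band cut-off, the road-condition labels, and the resulting emotion labels are illustrative assumptions, not values from the patent):

```python
# Hypothetical mapping from (speed band, congestion state) to an emotional
# attribute; every key and value here is an illustrative assumption.
EMOTION_MAP = {
    ("high", "congested"): "tense",
    ("high", "clear"): "relaxed",
    ("low", "congested"): "tense",
    ("low", "clear"): "relaxed",
}

def speed_band(speed_kmh: float) -> str:
    """Quantize a driving speed into a coarse band (assumed 60 km/h cut-off)."""
    return "high" if speed_kmh >= 60 else "low"

def emotional_attribute(speed_kmh: float, congestion: str) -> str:
    """Query the preset mapping to determine the driver's emotional attribute."""
    return EMOTION_MAP[(speed_band(speed_kmh), congestion)]
```

In practice the mapping would be keyed on the quantized intervals and levels described later in the text rather than on raw values.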
Step S203, obtaining a target audio set for adjusting the emotional attribute of the driver from the respective audio sets.
Specifically, the audio may be music, radio, live voice, or the like. Each audio set is divided according to the attribute information of the audio, and the attribute information of the audio includes the type of the audio, the name of the audio, the frequency of the audio signal corresponding to the audio, and the like.
A mapping between the audio sets and emotional attributes is preset, with each audio set used to adjust a corresponding emotional attribute. When the driver's emotional attribute is obtained while the vehicle is driving, the preset mapping is queried based on that attribute to determine the target audio set used to adjust it.
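The second lookup, from emotional attribute to target audio set (step S203), can be sketched the same way; the set names and their contents below are illustrative assumptions:

```python
# Hypothetical emotion-to-audio-set mapping: slow, gentle audio for a tense
# driver, rhythmic audio for a relaxed one (per the analysis earlier in the
# text); titles are placeholders.
AUDIO_SETS = {
    "tense": ["slow_song_a", "slow_song_b"],
    "relaxed": ["rhythmic_song_a", "rhythmic_song_b"],
}

def target_audio_set(emotion: str) -> list:
    """Query the preset mapping for the set that adjusts this emotion."""
    return AUDIO_SETS[emotion]
```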
Step S204, playing the audio in the target audio set.
Specifically, all the audio in the target audio set may be played, or part of the audio in the target audio set may be played. In addition, when the audio in the target audio set is played, a part of the segments in the audio may be played, or the entire audio may be played, which is not specifically limited in this application.
In the embodiment of the application, the emotion attribute of the driver is determined based on the driving speed and the driving road condition of the vehicle, then the corresponding target audio sets are obtained from the audio sets to adjust the emotion attribute of the driver, and the driving of the driver is assisted through the audio, so that the driving safety is improved on one hand, and the experience of listening to the audio of the driver is improved on the other hand.
Alternatively, in the step S201, the position information of the vehicle is acquired by using the positioning module, the time information of the vehicle is acquired by using the timing module, and then the running speed of the vehicle is acquired according to the position information of the vehicle and the time information of the vehicle.
Specifically, the positioning module may be a vehicle-mounted Global Positioning System (GPS), a vehicle-mounted BeiDou Navigation Satellite System (BDS), or the like. The positioning module may be a module inside the vehicle-mounted terminal device, or may be a device independent of the vehicle-mounted terminal device. When the positioning module is a device independent of the vehicle-mounted terminal device, the positioning module is bound with the vehicle-mounted terminal device in advance, and when the vehicle is started, the positioning module is started correspondingly. The positioning module and the vehicle-mounted terminal equipment carry out data communication through communication protocols such as a Bluetooth protocol.
The timing module may be a vehicle-mounted timer, and the timing module may be a module inside the vehicle-mounted terminal device, or may be a device independent of the vehicle-mounted terminal device. When the timing module is a device independent of the vehicle-mounted terminal device, the timing module is bound with the vehicle-mounted terminal device in advance, when the vehicle is started, the timing module is correspondingly started, and the timing module and the vehicle-mounted terminal device carry out data communication through communication protocols such as a Bluetooth protocol.
The vehicle-mounted terminal equipment determines the running distance of the vehicle according to the position information obtained by the two positioning, and then obtains the running speed of the vehicle according to the running distance of the vehicle and the interval duration between the two positioning.
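The computation above can be sketched as follows; the fix format and the use of the haversine formula for the distance between two positionings are assumptions, since the patent only states that the driving speed is obtained from the driving distance and the interval duration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def driving_speed_kmh(fix1, fix2):
    """Speed from two (lat, lon, unix_time) fixes: distance between the two
    positionings divided by the interval duration between them."""
    lat1, lon1, t1 = fix1
    lat2, lon2, t2 = fix2
    dist_m = haversine_m(lat1, lon1, lat2, lon2)
    return dist_m / (t2 - t1) * 3.6  # m/s -> km/h
```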
Optionally, in step S201, the driving road condition includes a level state of the driving road surface and a congestion state of the driving road section, the level state of the driving road surface is obtained by using the road surface level detection module, and the congestion state of the driving road section is obtained from the road condition server.
Specifically, the flatness detection module may be a vehicle-mounted bump accumulator, which contains a counting circuit based on a contact micro-switch and an electromagnetic counter. When the number of consecutive counts within the preset duration exceeds a preset threshold, the driving road surface is judged to be uneven; otherwise, it is judged to be level.
Or a plurality of counting threshold intervals can be preset, each counting threshold interval corresponds to one leveling state, then a target counting threshold interval matched with the continuous counting times within the preset time length is determined from each counting threshold interval, and the leveling state corresponding to the target counting threshold interval is used as the leveling state of the running road surface of the vehicle.
Illustratively, 4 level states are preset: level road surface, bumpy road surface, muddy bumpy road surface, and severe uphill/downhill road surface. The counting threshold interval corresponding to the level road surface is 1-50 counts, the interval for the bumpy road surface is 50-100 counts, the interval for the muddy bumpy road surface is 100-150 counts, and the interval for the severe uphill/downhill road surface is 150-200 counts. If the bump accumulator's consecutive count within the preset duration is 80, the level state of the vehicle's driving road surface is determined to be a bumpy road surface.
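The interval matching above can be sketched directly; note that the example intervals overlap at their edges, so the first match winning is an assumption, as is the default below the first interval:

```python
# Counting-threshold intervals copied from the example above (counts within
# the preset duration -> level state).
FLATNESS_INTERVALS = [
    (1, 50, "level road surface"),
    (50, 100, "bumpy road surface"),
    (100, 150, "muddy bumpy road surface"),
    (150, 200, "severe uphill/downhill road surface"),
]

def flatness_state(bump_count: int) -> str:
    """Return the level state of the target counting threshold interval
    matched by the consecutive count (first match wins at shared edges)."""
    for lo, hi, state in FLATNESS_INTERVALS:
        if lo <= bump_count <= hi:
            return state
    return "level road surface"  # assumed default outside all intervals
```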
The road flatness detection module can be a module in the vehicle-mounted terminal equipment, and can also be equipment independent of the vehicle-mounted terminal equipment. When the road flatness detection module is a device independent of the vehicle-mounted terminal device, the road flatness detection module is bound with the vehicle-mounted terminal device in advance, when a vehicle is started, the road flatness detection module is correspondingly started, and the road flatness detection module and the vehicle-mounted terminal device carry out data communication through communication protocols such as a Bluetooth protocol.
And the road condition server stores the real-time congestion states of the road sections at all positions. The vehicle-mounted terminal equipment sends a congestion state acquisition request to the road condition server, wherein the congestion state acquisition request comprises the current position of the vehicle. The road condition server determines a driving road section of the vehicle based on the current position of the vehicle, acquires the congestion state of the driving road section of the vehicle, and then sends the congestion state of the driving road section to the vehicle-mounted terminal device.
Illustratively, the congestion states of the road sections are divided into 4 congestion states, namely, clear, relatively congested, congested and very congested. The road condition server stores congestion states corresponding to the road section A, the road section B, the road section C and the road section D respectively, wherein the congestion state of the road section A is congestion, the congestion state of the road section B is relatively congestion, the congestion state of the road section C is relatively congestion, and the congestion state of the road section D is unobstructed.
The method comprises the steps that the vehicle-mounted terminal equipment obtains the current position of a vehicle, then a congestion state obtaining request carrying the current position of the vehicle is sent to a road condition server, the road condition server determines that a running road section of the vehicle is a road section A according to the current position of the vehicle, and the congestion state of the road section A is sent to the vehicle-mounted terminal equipment, namely the vehicle-mounted terminal equipment is informed that the current road section of the vehicle is congested.
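The request-response exchange with the road condition server can be sketched as follows; the section names, the stored states, and the position-to-section resolver are illustrative assumptions:

```python
# Road condition server's stored real-time congestion states, per the
# example above (section A congested, B and C relatively congested, D clear).
CONGESTION_BY_SECTION = {
    "A": "congested",
    "B": "relatively congested",
    "C": "relatively congested",
    "D": "clear",
}

def handle_congestion_request(position_to_section, current_position):
    """Server side: resolve the vehicle's current position to its driving
    road section, then return that section's congestion state. The resolver
    function is assumed; the patent does not specify how it works."""
    section = position_to_section(current_position)
    return CONGESTION_BY_SECTION[section]
```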
Optionally, in step S202, the present application provides at least the following embodiments to determine the emotional attribute of the driver:
according to the first embodiment, the emotional attribute of the driver is determined according to the driving speed and the driving road condition.
Specifically, the driving speed and the driving road condition are respectively quantized to obtain the influence factors corresponding to the driving speed and the driving road condition, and then the emotion attribute of the driver is determined according to the influence factors corresponding to the driving speed and the driving road condition.
When the running speed is quantized, a target speed interval matched with the running speed is acquired from each speed interval, and then the influence factor corresponding to the target speed interval is used as the influence factor corresponding to the running speed.
In a specific implementation, the driving speed is divided into different speed intervals in advance, based on the maximum speeds for different scenarios in national traffic regulations, and an influence factor is set for each speed interval. The factors may be set on the principle that the higher the driving speed, the larger the corresponding factor, or on the opposite principle that the higher the speed, the smaller the factor; this application does not limit the choice.
After the running speed of the vehicle is obtained, the running speed of the vehicle is compared with each speed interval, a target speed interval matched with the running speed of the vehicle is determined, and the influence factor corresponding to the target speed interval is used as the influence factor corresponding to the running speed of the vehicle. In addition, when the running speed of the vehicle meets the preset condition, the safety prompt voice is played. For example, when the running speed of the vehicle is higher than the preset safe speed, the overspeed reminding voice is played.
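A sketch of the interval matching and the safety prompt check; the interval boundaries, factor values, and the preset safe speed are all assumptions, since the patent leaves them open:

```python
# Illustrative speed intervals in km/h with assumed influence factors
# (here: the higher the speed, the larger the factor).
SPEED_INTERVALS = [(0, 40, 0.2), (40, 80, 0.5), (80, 120, 0.8)]
SAFE_SPEED_KMH = 120  # assumed preset safe speed

def speed_influence_factor(speed_kmh: float) -> float:
    """Return the influence factor of the matched target speed interval."""
    for lo, hi, factor in SPEED_INTERVALS:
        if lo <= speed_kmh < hi:
            return factor
    return 1.0  # assumed factor above the last interval

def should_play_overspeed_prompt(speed_kmh: float) -> bool:
    """True when the driving speed meets the preset condition for playing
    the safety prompt voice (here: exceeding the preset safe speed)."""
    return speed_kmh > SAFE_SPEED_KMH
```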
The driving road condition includes the flatness state of the driving road surface and the congestion state of the driving road section; accordingly, the road condition levels include flatness levels and congestion levels. When the driving road condition is quantized, a target flatness level matching the flatness state of the driving road surface is obtained from the flatness levels, and the influence factor corresponding to that target flatness level is used as the influence factor for the flatness state of the driving road surface. Similarly, a target congestion level matching the congestion state of the driving road section is obtained from the congestion levels, and the influence factor corresponding to that target congestion level is used as the influence factor for the congestion state of the driving road section.
In a specific implementation, a plurality of flatness levels can be set according to the flatness of the driving road surface, and an influence factor is then set for each flatness level. During specific setting, the influence factor for each flatness level can be set on the principle that the smoother the driving road surface, the larger the corresponding influence factor; it can also be set on the opposite principle, that the rougher the driving road surface, the larger the corresponding influence factor. This application is not specifically limited in this respect.
Likewise, a plurality of congestion levels are set according to the congestion degree of the driving road section, and an influence factor is then set for each congestion level. During specific setting, the influence factor for each congestion level can be set on the principle that the more congested the driving road section, the larger the corresponding influence factor; it can also be set on the opposite principle, that the more unobstructed the driving road section, the larger the corresponding influence factor. This application is not specifically limited in this respect.
After the flatness state of the driving road surface is obtained, it is compared with each flatness level, a target flatness level matching the flatness state is determined, and the influence factor corresponding to the target flatness level is used as the influence factor for the flatness state of the driving road surface.
After the congestion state of the driving road section is obtained, it is compared with each congestion level, a target congestion level matching the congestion state is determined, and the influence factor corresponding to the target congestion level is used as the influence factor for the congestion state of the driving road section.
In addition, when the driving road condition of the vehicle meets a preset condition, a safety prompt voice is played. For example, when the target flatness level of the driving road surface is greater than a preset flatness level, a voice prompt that the road surface is uneven is played; and when the target congestion level of the driving road section is greater than a preset congestion level, a voice prompt that the road section is congested is played.
Further, according to the weights respectively corresponding to the running speed of the vehicle, the flatness state of the driving road surface, and the congestion state of the driving road section, the influence factors respectively corresponding to these three features are weighted and summed to obtain the emotion influence value of the driver. Then, based on the emotion influence value of the driver, a preset emotion attribute comparison table is queried to determine the emotion attribute of the driver.
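This weighted-sum-plus-lookup step can be sketched as follows. The comparison-table boundaries follow the example Table 4 further down; the middle attribute label is an assumption, since the translated table repeats "low":

```python
# Comparison table as (upper bound of emotion influence value, attribute).
# Boundaries follow the example Table 4 below; the middle label is assumed.
EMOTION_TABLE = [
    (2.0, "low driving pressure"),
    (4.0, "relatively low driving pressure"),
    (6.0, "high driving pressure"),
]

def emotion_attribute(factors, weights):
    """Weighted sum of influence factors, then a comparison-table lookup.
    Returns (emotion influence value, emotion attribute)."""
    value = sum(f * w for f, w in zip(factors, weights))
    for upper_bound, attribute in EMOTION_TABLE:
        if value <= upper_bound:
            return value, attribute
    return value, EMOTION_TABLE[-1][1]
```

With the factors 2, 1, 2 and weights 0.6, 0.2, 0.2 of the worked example below, this yields the value 1.8 and the attribute "low driving pressure".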
Illustratively, as shown in Table 1, the running speed is divided into 5 speed intervals: speed interval 1 (0~30 km/h), speed interval 2 (30~40 km/h), speed interval 3 (40~70 km/h), speed interval 4 (70~120 km/h), and speed interval 5 (above 120 km/h). The influence factors corresponding to speed intervals 1 through 5 are 1, 2, 3, 4, and 5 respectively.
Table 1.
Speed interval     Speed range        Influence factor
Speed interval 1   0~30 km/h          1
Speed interval 2   30~40 km/h         2
Speed interval 3   40~70 km/h         3
Speed interval 4   70~120 km/h        4
Speed interval 5   above 120 km/h     5
As shown in Table 2, 4 flatness levels are preset: flatness level 1 (flat road surface), flatness level 2 (bumpy road surface), flatness level 3 (muddy, uneven road surface), and flatness level 4 (steep uphill/downhill). The influence factors corresponding to flatness levels 1 through 4 are 1, 3, 5, and 7 respectively.
Table 2.
Flatness level     Road surface state          Influence factor
Flatness level 1   Flat road surface           1
Flatness level 2   Bumpy road surface          3
Flatness level 3   Muddy, uneven road surface  5
Flatness level 4   Steep uphill/downhill       7
As shown in Table 3, 4 congestion levels are preset: congestion level 1 (clear), congestion level 2 (relatively congested), congestion level 3 (congested), and congestion level 4 (very congested). The influence factors corresponding to congestion levels 1 through 4 are 2, 4, 6, and 8 respectively.
Table 3.
Congestion level     Road section state     Influence factor
Congestion level 1   Clear                  2
Congestion level 2   Relatively congested   4
Congestion level 3   Congested              6
Congestion level 4   Very congested         8
If the running speed of the vehicle is 35 km/h, the target speed interval matching the running speed is determined, by looking up Table 1, to be speed interval 2 (30~40 km/h), with a corresponding influence factor of 2. If the flatness state of the driving road surface is a flat road surface, the target flatness level matching it can be determined, by looking up Table 2, to be flatness level 1, with a corresponding influence factor of 1. If the congestion state of the driving road section is clear, the target congestion level matching it can be determined, by looking up Table 3, to be congestion level 1, with a corresponding influence factor of 2.
Suppose the weight corresponding to the running speed of the vehicle is 0.6, the weight corresponding to the flatness state of the driving road surface is 0.2, and the weight corresponding to the congestion state of the driving road section is 0.2. Weighting and summing the influence factors according to these preset weights gives the emotion influence value of driver M as:
0.6×2+0.2×1+0.2×2=1.8。
The emotion attribute comparison table is set as shown in Table 4:
Table 4.
Emotion influence value    Emotional attribute
0~2                        Low driving pressure
2~4                        Relatively low driving pressure
4~6                        High driving pressure
In Table 4, the greater the emotion influence value, the more the emotion attribute is biased toward a negative emotion; the smaller the emotion influence value, the more it is biased toward a positive emotion. Based on the emotion influence value of driver M (1.8), looking up the comparison table shown in Table 4 determines that the emotion attribute of driver M is low driving pressure.
In a second implementation, the human body sign information of the driver is obtained, and the emotion attribute of the driver is determined according to the human body sign information, the driving speed, and the driving road condition.
Specifically, the human body sign information of the driver, the driving speed, and the driving road condition are respectively quantized to obtain the influence factors corresponding to each of them. The emotion attribute of the driver is then determined according to these influence factors.
When the human body sign information of the driver is subjected to quantization processing, a target human body sign interval matched with the human body sign information of the driver can be obtained from each human body sign interval, and an influence factor corresponding to the target human body sign interval is used as an influence factor corresponding to the human body sign information of the driver.
In specific implementation, the human body sign information includes indexes such as pulse and blood pressure, and the human body sign information can be acquired through a sign detection module embedded in the vehicle. For example, a heart rate sensing device is embedded in a steering wheel of a vehicle, and when a hand of a driver contacts the steering wheel, the heart rate sensing device detects and obtains human body sign information of the driver.
The human body sign information can also be obtained through a sign detection device independent of the vehicle; such a device can be any kind of wearable equipment, for example, a smart watch or a smart band. In a specific implementation, the sign detection device is bound with the vehicle-mounted terminal device in advance. The sign detection device detects the human body sign information of the driver in real time and sends it to the vehicle-mounted terminal device.
When the physical sign information of the human body is different, the corresponding states of the human body are also different. For example, if the pulse beat of the human body is fast and the blood pressure is high, the human body may be in a tense state, and if the pulse beat of the human body is at a normal level and the blood pressure is at a normal level, the human body may be in a relaxed state. Based on this, in the embodiment of the application, different human body sign intervals are pre-divided, each human body sign interval corresponds to one pulse beating range and one blood pressure range, and then the influence factor corresponding to each human body sign interval is set. During specific setting, the influence factor corresponding to each human body sign interval can be obtained based on the principle that the higher each index in the human body sign information is, the larger the corresponding influence factor is; the influence factor corresponding to each human body sign interval can also be obtained based on the principle that the corresponding influence factor is larger when each index in the human body sign information is lower, and thus, the application is not particularly limited.
After the human body sign information of the driver is obtained, comparing the human body sign information of the driver with each human body sign interval, determining a target human body sign interval matched with the human body sign information of the driver, and taking an influence factor corresponding to the target human body sign interval as an influence factor corresponding to the human body sign information of the driver.
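A sketch of this two-index interval matching follows. The concrete pulse (beats per minute) and blood-pressure (mmHg) boundaries are assumptions standing in for the symbolic bounds a–d and e–h used in Table 5 below:

```python
# Assumed numeric bounds for the symbolic intervals [a,b], [b,c], [c,d]
# (pulse, bpm) and [e,f], [f,g], [g,h] (systolic pressure, mmHg) of
# Table 5; the concrete numbers are illustrative only.
SIGN_INTERVALS = [
    # (pulse range, blood-pressure range, influence factor)
    ((60, 80), (100, 120), 2),    # interval 1: relaxed
    ((80, 100), (120, 140), 3),   # interval 2: relatively tense
    ((100, 120), (140, 160), 4),  # interval 3: tense
]

def sign_influence_factor(pulse, blood_pressure):
    """Return the factor of the sign interval matching both indices,
    or None when pulse and blood pressure fall in different intervals."""
    for (p_lo, p_hi), (b_lo, b_hi), factor in SIGN_INTERVALS:
        if p_lo <= pulse < p_hi and b_lo <= blood_pressure < b_hi:
            return factor
    return None
```

The `None` branch is a design choice not settled by the text; a real system would need a rule for indices that disagree, for instance taking the larger of the two per-index factors.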
The specific process of performing quantization processing on the driving speed and the driving road condition to obtain the influence factors corresponding to the driving speed and the driving road condition respectively is described in detail in the first embodiment, and is not described herein again.
Further, according to the weights respectively corresponding to the human body sign information of the driver, the running speed of the vehicle, the flatness state of the driving road surface, and the congestion state of the driving road section, the influence factors respectively corresponding to these features are weighted and summed to obtain the emotion influence value of the driver. Then, based on the emotion influence value of the driver, a preset emotion attribute comparison table is queried to determine the emotion attribute of the driver.
Exemplarily, as shown in Table 5, 3 different human body sign intervals are divided based on the human body sign information: human body sign interval 1 (relaxed), human body sign interval 2 (relatively tense), and human body sign interval 3 (tense). The pulse range corresponding to human body sign interval 1 is [a, b], that of interval 2 is [b, c], and that of interval 3 is [c, d], where a, b, c, and d are natural numbers and a < b < c < d. The blood pressure range corresponding to human body sign interval 1 is [e, f], that of interval 2 is [f, g], and that of interval 3 is [g, h], where e, f, g, and h are natural numbers and e < f < g < h. The influence factors corresponding to human body sign intervals 1, 2, and 3 are 2, 3, and 4 respectively.
Table 5.
Human body sign interval     State               Pulse range   Blood pressure range   Influence factor
Human body sign interval 1   Relaxed             [a, b]        [e, f]                 2
Human body sign interval 2   Relatively tense    [b, c]        [f, g]                 3
Human body sign interval 3   Tense               [c, d]        [g, h]                 4
As in the first implementation, the running speed is divided into the 5 speed intervals shown in Table 1, the 4 flatness levels shown in Table 2 are preset, and the 4 congestion levels shown in Table 3 are preset, with the same influence factors as described above.
If the running speed of the vehicle is 100 km/h, the target speed interval matching the running speed is determined, by looking up Table 1, to be speed interval 4 (70~120 km/h), with a corresponding influence factor of 4. If the flatness state of the driving road surface is a flat road surface, the target flatness level can be determined, by looking up Table 2, to be flatness level 1, with a corresponding influence factor of 1. If the congestion state of the driving road section is clear, the target congestion level can be determined, by looking up Table 3, to be congestion level 1, with a corresponding influence factor of 2. If the pulse rate of driver W lies in the interval [c, d] and the blood pressure of driver W lies in the interval [g, h], the target human body sign interval matching the sign information of driver W can be determined, by looking up Table 5, to be human body sign interval 3, with a corresponding influence factor of 4.
Suppose the weight corresponding to the human body sign information of driver W is 0.2, the weight corresponding to the running speed of the vehicle is 0.6, the weight corresponding to the flatness state of the driving road surface is 0.1, and the weight corresponding to the congestion state of the driving road section is 0.1. Weighting and summing the influence factors according to these preset weights gives the emotion influence value of driver W as:
0.2×4+0.6×4+0.1×1+0.1×2=3.5。
The emotion attribute comparison table is set as shown in Table 6:
Table 6.
Emotion influence value    Emotional attribute
0~2                        Low driving pressure
2~3                        Relatively low driving pressure
3~4                        Relatively high driving pressure
4~6                        High driving pressure
In Table 6, the greater the emotion influence value, the more the emotion attribute is biased toward a negative emotion; the smaller the value, the more it is biased toward a positive emotion. Based on the emotion influence value of driver W (3.5), looking up the comparison table shown in Table 6 determines that the emotion attribute of driver W is relatively high driving pressure.
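The four-feature example can be checked numerically; the weights and factors are those of the worked example above, and the table boundaries follow Table 6 (with the repeated translated labels disambiguated as an assumption):

```python
# (upper bound of emotion influence value, attribute), per Table 6 above;
# the "relatively low/high" labels are assumed disambiguations.
EMOTION_TABLE_6 = [
    (2.0, "low driving pressure"),
    (3.0, "relatively low driving pressure"),
    (4.0, "relatively high driving pressure"),
    (6.0, "high driving pressure"),
]

def emotion_attribute_w(factors, weights):
    """Weighted sum over four features (signs, speed, flatness, congestion),
    then a Table 6 lookup. Returns (value, attribute)."""
    value = sum(f * w for f, w in zip(factors, weights))
    for upper_bound, attribute in EMOTION_TABLE_6:
        if value <= upper_bound:
            return value, attribute
    return value, EMOTION_TABLE_6[-1][1]
```

With factors 4, 4, 1, 2 and weights 0.2, 0.6, 0.1, 0.1 this reproduces the value 3.5 and the attribute "relatively high driving pressure".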
In the embodiment of the application, when determining the emotion attribute of the driver, human body sign information in the driver dimension is obtained together with the running speed and driving road condition in the vehicle dimension; the feature information of these multiple dimensions is then combined to determine the emotion attribute. This improves the accuracy with which the emotion attribute of the driver is determined, and in turn the accuracy of the audio subsequently recommended to the driver based on that attribute.
It should be noted that, in the embodiments of the present application, the implementation of determining the emotional attribute of the driver is not limited to the above-mentioned several implementations, and the emotional attribute of the driver may be determined based on any one of the four dimensional features, that is, the human body physical sign information of the driver, the driving speed of the vehicle, the level state of the driving road surface, and the congestion state of the driving road section, or a combination of the features of multiple dimensions, and this application is not limited specifically.
Alternatively, in step S203, each audio set is divided according to the audio signal frequency corresponding to the audio.
Specifically, the audio in the audio library is first converted into an audio signal, i.e., Pulse Code Modulation (PCM) data, and the frequency distribution of the audio signal is then obtained through a fast Fourier transform. For any audio, the audio signal frequency corresponding to that audio may be the average frequency of the audio signal for the entire audio, or the average frequency of the audio signal for an audio segment. By comparing the audio signal frequencies of different audios, the audio in the audio library is then divided into a plurality of audio sets.
Illustratively, the songs in the vehicle-mounted music library are obtained first, then the songs in the vehicle-mounted music library are converted into audio signals, frequency distribution in the audio signals is obtained through fast Fourier transform, then the frequency average value of the audio signals corresponding to the whole song is obtained according to the frequency distribution of the audio signals corresponding to the whole song, and the frequency average value of the audio signals corresponding to the whole song is used as the frequency of the audio signals corresponding to the song.
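The per-song average frequency described above can be sketched as follows. For clarity this uses a naive discrete Fourier transform in place of the FFT the text describes, and a magnitude-weighted mean over the positive-frequency bins; both choices are illustrative, not prescribed by the text:

```python
import math

def average_frequency(pcm, sample_rate):
    """Magnitude-weighted mean frequency (Hz) of a PCM frame, computed
    with a naive DFT over the positive-frequency bins. A real system
    would use an FFT, as described in the text."""
    n = len(pcm)
    weighted = 0.0
    total = 0.0
    for k in range(1, n // 2 + 1):  # skip the DC bin
        re = sum(pcm[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(pcm[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        magnitude = math.hypot(re, im)
        weighted += magnitude * (k * sample_rate / n)  # bin k -> k*fs/n Hz
        total += magnitude
    return weighted / total
```

For a pure 300 Hz tone sampled at 4000 Hz this returns approximately 300, so such a song would land in the 300 Hz~400 Hz set of the example below.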
The songs in the vehicle-mounted music library are divided into 4 song sets according to the audio signal frequency, and as shown in table 7, the 4 song sets are respectively a song set 1, a song set 2, a song set 3 and a song set 4, wherein the audio signal frequency range corresponding to the song set 1 is 100 Hz-200 Hz, the audio signal frequency range corresponding to the song set 2 is 200 Hz-300 Hz, the audio signal frequency range corresponding to the song set 3 is 300 Hz-400 Hz, and the audio signal frequency range corresponding to the song set 4 is 400 Hz-500 Hz.
Table 7.
Song collection   Audio signal frequency range
Song set 1        100 Hz~200 Hz
Song set 2        200 Hz~300 Hz
Song set 3        300 Hz~400 Hz
Song set 4        400 Hz~500 Hz
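Assigning a song to its set by average audio signal frequency, per Table 7, then reduces to a range lookup. A sketch (band edges from the table; treating the lower bound as inclusive is an assumption):

```python
# (lower Hz inclusive, upper Hz exclusive, song set), per Table 7 above.
FREQ_BANDS = [
    (100, 200, "song set 1"),
    (200, 300, "song set 2"),
    (300, 400, "song set 3"),
    (400, 500, "song set 4"),
]

def song_set_for_frequency(avg_freq_hz):
    """Assign a song to a set by its average audio signal frequency."""
    for lower, upper, name in FREQ_BANDS:
        if lower <= avg_freq_hz < upper:
            return name
    return None  # outside the catalogued ranges
```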
Because the tempo of the audio is reflected in the audio signal frequency corresponding to the audio, dividing the audio library into a plurality of audio sets based on audio signal frequency effectively separates audio with different tempos, which facilitates subsequent accurate music recommendation for the driver.
It should be noted that, in the embodiment of the present application, the audio sets are not limited to being divided according to audio signal frequency; they may also be divided according to factors such as the type or name of the audio, which is not specifically limited in this application.
Further, an audio set for adjusting each emotion attribute is preset, and the mapping relationship between each emotion attribute and its corresponding audio set is stored. When the driver is in a negative emotion, such as tension or high driving pressure, playing music with a slow, gentle rhythm can help the driver calm down and respond to problems more quickly. When the driver is in a relaxed, positive mood with low driving pressure, playing rhythmic, up-tempo music can maintain that positive mood. In view of this, when setting the mapping relationship between emotion attributes and audio sets, a fast-tempo audio set (i.e., an audio set with high audio signal frequencies) may be assigned to emotion attributes biased toward the positive, and a slow-tempo audio set (i.e., an audio set with low audio signal frequencies) to emotion attributes biased toward the negative. After the emotion attribute of the driver is obtained, this mapping relationship can be queried to obtain the target audio set for adjusting the driver's emotion attribute.
Illustratively, the mapping relationship between emotion attributes and song sets is set as shown in Table 8:
Table 8.
Emotional attribute                Song collection
Low driving pressure               Song set 4
Relatively low driving pressure    Song set 3
Relatively high driving pressure   Song set 2
High driving pressure              Song set 1
In table 8, the audio signal frequency range corresponding to song set 1 is 100Hz to 200Hz, the audio signal frequency range corresponding to song set 2 is 200Hz to 300Hz, the audio signal frequency range corresponding to song set 3 is 300Hz to 400Hz, and the audio signal frequency range corresponding to song set 4 is 400Hz to 500 Hz.
If it is determined that the emotion attribute of driver W is relatively high driving pressure, the target audio set for adjusting the emotion attribute of driver W can be determined to be song set 2 by looking up the mapping relationship shown in Table 8.
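The Table 8 lookup is a direct dictionary mapping. The attribute labels below disambiguate the repeated translated labels and are therefore an assumption:

```python
# Emotion attribute -> target audio set, per Table 8 above; the
# "relatively low/high" labels are assumed disambiguations.
EMOTION_TO_SET = {
    "low driving pressure": "song set 4",           # fast tempo, keep the positive mood
    "relatively low driving pressure": "song set 3",
    "relatively high driving pressure": "song set 2",
    "high driving pressure": "song set 1",          # slow tempo, calm the driver
}

def target_audio_set(emotion_attribute):
    """Return the preset audio set that adjusts the given emotion attribute."""
    return EMOTION_TO_SET[emotion_attribute]
```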
In the embodiment of the application, audio capable of adjusting the emotion attribute is recommended to the driver according to the driver's emotion attribute, so that audio-assisted driving is achieved while the driver enjoys the listening experience.
Optionally, in step S204, the audios in the target audio set are played in sequence according to their arrangement order in the set. Specifically, the arrangement order is determined by sorting the audios from the highest score to the lowest, where the score of an audio can be determined by at least the following embodiments:
in the first embodiment, the score of the audio is determined according to the degree of influence of the audio on the emotional attribute of the driver.
In a specific implementation, after playing an audio, the emotional attribute change of the driver is continuously monitored. When the emotion attribute of the driver changes towards the positive emotion attribute, the audio can improve the emotion attribute of the driver, so that the score of the audio is increased, and the specific increased score can be determined according to the degree of change of the emotion attribute of the driver towards the positive emotion attribute. When the emotion attribute of the driver changes to the negative emotion attribute, the audio deteriorates the emotion attribute of the driver, so that the score of the audio is reduced, and the specific reduced score can be determined according to the degree of change of the emotion attribute of the driver to the negative emotion attribute. When the emotional attribute of the driver is not changed, the score of the audio is not adjusted.
In the second embodiment, the score of the audio may be determined according to the degree of influence of the audio on the human body signs of the driver.
In specific implementation, after one audio is played, the human body sign change of a driver is continuously monitored. When the human body signs of the driver change to the relaxed state, the audio can improve the human body signs of the driver, so that the score of the audio is improved, and the specific improved score can be determined according to the degree of the change of the human body signs of the driver to the relaxed state. When the human body sign of the driver changes to the tension state, the audio deteriorates the human body sign of the driver, so that the score of the audio is reduced, and the specific reduced score can be determined according to the degree of the change of the human body sign of the driver to the tension state. When the human body signs of the driver are not changed, the score of the audio is not adjusted.
In a third embodiment, the score of the audio may be determined according to the degree of influence of the audio on the driving speed of the vehicle.
In a specific implementation, the running speed of the vehicle is continuously monitored after an audio is played. If the current running speed of the vehicle is greater than the preset safe speed and the running speed decreases after the audio is played, this indicates that the audio helps bring the running speed back toward a safe level, so the score of the audio is increased; the specific increase can be determined according to the degree of the speed reduction. If the running speed increases instead, the audio worsens the situation, so the score of the audio is reduced; the specific reduction can be determined according to the degree of the speed increase. When the running speed of the vehicle does not change, the score of the audio is not adjusted.
In the fourth embodiment, the score of an audio is determined according to the driver's preference for the audio. In a specific implementation, the number of times the driver has listened to each audio, or the total time spent listening to it, is determined from the driver's listening record: the more times, or the longer, the driver has listened to an audio, the higher its score; the fewer times, or the shorter, the lower its score.
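A minimal sketch of this fourth embodiment; the particular weights (and the linear combination itself) are assumptions, since the text only requires the score to grow with listening count and duration:

```python
def preference_score(play_count, total_listen_seconds,
                     w_count=1.0, w_duration=0.01):
    """Score an audio from the driver's listening record: more plays and
    a longer total listening time yield a higher score. The weights are
    illustrative; any monotonically increasing combination would do."""
    return w_count * play_count + w_duration * total_listen_seconds
```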
It should be noted that, in the embodiment of the present application, the implementation manner of determining the score of the audio is not limited to the above several, and other implementation manners may also be used, for example, the foregoing several implementation manners may be combined arbitrarily to determine the score of the audio, and the present application is not limited specifically.
Optionally, when the audio in the target audio set is played, the driving speed, the driving road condition, and the human body sign information of the driver of the vehicle may be collected again every preset time period, then the emotion attribute of the driver is determined based on the collected driving speed, the collected driving road condition, and the collected human body sign information of the driver, and the target audio set for adjusting the emotion attribute of the driver is obtained. And if the redetermined target audio set is different from the previously determined target audio set, playing the audio in the redetermined target audio set, otherwise, continuing to play the audio in the previously determined target audio set.
When the audio in the target audio set is played, the driving speed, the driving road condition and the human body sign information of the driver of the vehicle can be collected again within a preset time length before each audio is played, then the emotion attribute of the driver is determined based on the collected driving speed, the collected driving road condition and the human body sign information of the driver, and the target audio set for adjusting the emotion attribute of the driver is obtained. And if the redetermined target audio set is different from the previously determined target audio set, playing the audio in the redetermined target audio set, otherwise, continuing to play the audio in the previously determined target audio set.
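The re-evaluation rule shared by the two variants above (periodic, or before each audio) can be sketched as a single decision step:

```python
def reselect_playlist(current_set, reevaluated_set):
    """Apply the rule above: switch playback to the newly determined
    target audio set only when it differs from the set currently playing.
    Returns (set to play, whether a switch happened)."""
    if reevaluated_set != current_set:
        return reevaluated_set, True
    return current_set, False
```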
When switching between different audios, if the previous audio has not finished playing, it is stopped with a fade-out while the next audio is introduced with a fade-in. The fade-out and fade-in achieve a natural transition between audios, improving the driver's listening experience.
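The fade-out/fade-in transition can be sketched as mixing the tail of the outgoing audio with the head of the incoming one under complementary gain ramps; the linear ramps are an assumption (real players often use equal-power curves):

```python
def crossfade(tail, head):
    """Mix the last samples of the outgoing audio (`tail`) with the first
    samples of the incoming audio (`head`, same length), fading the former
    out and the latter in with complementary linear gain ramps."""
    n = len(tail)
    mixed = []
    for i in range(n):
        gain_out = 1.0 - (i + 1) / n   # ramps from just under 1 down to 0
        gain_in = (i + 1) / n          # ramps up to 1
        mixed.append(tail[i] * gain_out + head[i] * gain_in)
    return mixed
```

For example, fading a constant signal of 1.0 into silence over four samples yields the descending ramp [0.75, 0.5, 0.25, 0.0].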
In order to describe the technical solution of the embodiment of the present application more clearly, a system architecture applicable to the method is shown in fig. 3, and the system architecture of the method includes a vehicle 301, a road condition server 302, and an application server 303, where the vehicle 301 is embedded with a vehicle-mounted terminal device, the vehicle-mounted terminal device is installed with a vehicle-mounted music application, and the application server 303 is a background server corresponding to the vehicle-mounted music application. The vehicle-mounted terminal device is connected with the road condition server 302 and the application server 303 through a wireless network.
Based on the system architecture shown in fig. 3, an embodiment of the present application provides a flow of a control method for audio playing, as shown in fig. 4, including the following steps:
step S401, the vehicle-mounted terminal equipment starts a Bluetooth protocol.
And S402, binding the vehicle-mounted GPS, the vehicle-mounted heart rate sensing equipment and the vehicle-mounted bump accumulation instrument by the vehicle-mounted terminal equipment.
When the vehicle is started, the vehicle-mounted GPS, the vehicle-mounted heart rate sensing equipment and the vehicle-mounted bump accumulation instrument are correspondingly started. The vehicle-mounted GPS collects the position information of the vehicle and sends the position information of the vehicle to the vehicle-mounted terminal equipment; the vehicle-mounted heart rate sensing equipment collects human body sign information of a driver and sends the human body sign information of the driver to the vehicle-mounted terminal equipment; and the vehicle-mounted jolt accumulator sends a counting result in a preset time length to the vehicle-mounted terminal equipment.
Step S403, the vehicle-mounted terminal device determines whether the vehicle is in a driving state; if so, step S404 is executed, otherwise step S412 is executed.
Step S404, the vehicle-mounted terminal device acquires the driving speed of the vehicle, the smoothness state of the driving road surface, and the driver's physical sign information.
Specifically, the vehicle-mounted terminal device determines the driving speed of the vehicle from the position information of the vehicle and the time length counted by the timer, and determines the smoothness state of the driving road surface from the count reported by the vehicle-mounted jolt counter.
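The two derivations in this step can be sketched as follows. The haversine distance, the smoothness thresholds, and the function names are illustrative assumptions rather than the patent's concrete method:

```python
import math

def travel_speed_kmh(pos_a, pos_b, elapsed_s):
    """Approximate speed from two (lat, lon) GPS fixes taken
    elapsed_s seconds apart, using the haversine great-circle distance."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*pos_a, *pos_b))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))  # mean Earth radius
    return distance_m / elapsed_s * 3.6                  # m/s -> km/h

def road_smoothness(jolt_count):
    """Classify the road surface from the jolt counter's count over the
    preset time window (the thresholds here are assumed)."""
    if jolt_count < 5:
        return "smooth"
    if jolt_count < 20:
        return "uneven"
    return "rough"
```

A production system would smooth the GPS-derived speed over several fixes to suppress positioning noise; a single pair of fixes is used here only to keep the sketch short.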
Step S405, the vehicle-mounted terminal device acquires the congestion state of the driving road section from the road condition server.
Step S406, the vehicle-mounted terminal device determines the driver's emotional attribute according to the driver's physical sign information, the driving speed of the vehicle, the smoothness state of the driving road surface, and the congestion state of the driving road section.
Step S407, the vehicle-mounted terminal device sends the driver's emotional attribute to the application server.
Step S408, the application server obtains, from the respective music sets, a target music set for adjusting the driver's emotional attribute.
Step S409, the application server obtains the target music with the highest score from the target music set.
Step S410, the application server sends the target music to the vehicle-mounted terminal device.
Step S411, the vehicle-mounted terminal device plays the target music and returns to step S403.
Step S412, end.
Based on the system architecture shown in fig. 3, an embodiment of the present application provides another flow of a control method for audio playing, as shown in fig. 5, including the following steps:
Step S501, the vehicle-mounted terminal device enables the Bluetooth protocol.
Step S502, the vehicle-mounted terminal device binds to the vehicle-mounted GPS, the vehicle-mounted heart rate sensing device, and the vehicle-mounted jolt counter.
Step S503, the vehicle-mounted terminal device determines whether the vehicle is in a driving state; if so, step S504 is executed, otherwise step S512 is executed.
Step S504, the vehicle-mounted terminal device acquires the driving speed of the vehicle, the smoothness state of the driving road surface, and the driver's physical sign information.
Step S505, the vehicle-mounted terminal device acquires the congestion state of the driving road section from the road condition server.
Step S506, the vehicle-mounted terminal device sends the driver's physical sign information, the driving speed of the vehicle, the smoothness state of the driving road surface, and the congestion state of the driving road section to the application server.
Step S507, the application server determines the driver's emotional attribute according to the driver's physical sign information, the driving speed of the vehicle, the smoothness state of the driving road surface, and the congestion state of the driving road section.
Step S508, the application server obtains, from the respective music sets, a target music set for adjusting the driver's emotional attribute.
Step S509, the application server obtains the target music with the highest score from the target music set.
Step S510, the application server sends the target music to the vehicle-mounted terminal device.
Step S511, the vehicle-mounted terminal device plays the target music and returns to step S503.
Step S512, end.
In the embodiment of the application, the driver's emotional attribute is determined based on the driving speed of the vehicle and the driving road conditions, and a corresponding target audio set is then obtained from the audio sets to adjust that emotional attribute. Assisting the driver with audio in this way improves driving safety on the one hand, and the driver's listening experience on the other.
Based on the same technical concept, an embodiment of the present application provides an apparatus for controlling audio playing, as shown in fig. 6, the apparatus 600 includes:
the obtaining module 601, configured to acquire the driving speed and the driving road conditions of a vehicle;
the identification module 602, configured to determine the driver's emotional attribute according to the driving speed and the driving road conditions;
the matching module 603, configured to obtain, from respective audio sets, a target audio set for adjusting the driver's emotional attribute, where each audio set is divided according to attribute information of the audio;
the playing module 604, configured to play the audio in the target audio set.
Optionally, the identifying module 602 is specifically configured to:
acquiring the driver's physical sign information;
and determining the driver's emotional attribute according to the driver's physical sign information, the driving speed, and the driving road conditions.
Optionally, the identifying module 602 is specifically configured to:
quantizing the driver's physical sign information, the driving speed, and the driving road conditions respectively, to obtain the influence factors corresponding to each;
and determining the driver's emotional attribute according to the influence factors corresponding to the driver's physical sign information, the driving speed, and the driving road conditions.
Optionally, the identifying module 602 is specifically configured to:
acquiring, from the physical sign intervals, a target physical sign interval matching the driver's physical sign information, and taking the influence factor corresponding to the target physical sign interval as the influence factor corresponding to the driver's physical sign information;
acquiring, from the speed intervals, a target speed interval matching the driving speed, and taking the influence factor corresponding to the target speed interval as the influence factor corresponding to the driving speed;
and acquiring, from the road condition grades, a target road condition grade matching the driving road conditions, and taking the influence factor corresponding to the target road condition grade as the influence factor corresponding to the driving road conditions.
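The interval matching described above can be sketched as follows. The interval boundaries, factor values, and the way the factors combine into an emotional attribute are all illustrative assumptions, since the patent does not fix concrete numbers:

```python
# Each entry: (exclusive upper bound of the interval, influence factor).
HEART_RATE_FACTORS = [(70, 0), (90, 1), (float("inf"), 2)]   # bpm
SPEED_FACTORS      = [(40, 0), (80, 1), (float("inf"), 2)]   # km/h
ROAD_FACTORS       = {"smooth": 0, "uneven": 1, "rough": 2}  # road grade

def factor_for(value, intervals):
    """Return the influence factor of the first interval containing value."""
    for upper, factor in intervals:
        if value < upper:
            return factor
    raise ValueError("no matching interval")

def emotional_attribute(heart_rate, speed_kmh, road_condition):
    """Combine the three influence factors into a coarse emotion label.
    Summing the factors and thresholding the total is an assumption
    made for illustration only."""
    total = (factor_for(heart_rate, HEART_RATE_FACTORS)
             + factor_for(speed_kmh, SPEED_FACTORS)
             + ROAD_FACTORS[road_condition])
    if total <= 1:
        return "calm"
    if total <= 3:
        return "neutral"
    return "agitated"
```

For example, a resting heart rate on an empty smooth road maps to a low total and a "calm" label, while a high heart rate at high speed on a rough road maps to "agitated".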
Optionally, the obtaining module 601 is specifically configured to:
acquiring the position information of the vehicle using a positioning module;
acquiring the time information of the vehicle using a timing module;
and obtaining the driving speed of the vehicle from the position information and the time information of the vehicle.
Optionally, the driving road conditions include the smoothness state of the driving road surface and the congestion state of the driving road section;
the obtaining module 601 is specifically configured to:
obtain the smoothness state of the driving road surface using a road surface smoothness detection module;
and acquire the congestion state of the driving road section from a road condition server.
Optionally, the respective audio sets are divided according to the audio signal frequency corresponding to the audio.
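One plausible reading of dividing audio sets by signal frequency is grouping tracks by their dominant spectral frequency. The band boundaries and the naive dependency-free DFT below are illustrative assumptions, not the patent's method:

```python
import cmath

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest DFT bin, ignoring DC.
    A naive O(n^2) DFT keeps the sketch free of dependencies; a real
    implementation would use an FFT."""
    n = len(samples)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * sample_rate / n

def audio_set_for(samples, sample_rate):
    """Assign a track to an audio set by its dominant frequency
    (band boundaries are assumed)."""
    f = dominant_frequency(samples, sample_rate)
    if f < 250:
        return "low"
    if f < 2000:
        return "mid"
    return "high"
```

In practice the classification would run offline on the server over whole tracks, with the resulting set label stored as the audio's attribute information.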
Optionally, the playing module 604 is specifically configured to:
and sequentially playing the audios in the target audio set according to their arrangement order, where the arrangement order is determined by the audios' scores from high to low, and an audio's score is determined by the degree to which the audio influences the driver's emotional attribute.
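The score-ordered playback can be sketched as a simple descending sort. The track dictionary structure and the score values are illustrative assumptions:

```python
def playback_order(target_set):
    """Return track titles sorted by score, highest first.
    Each track is a dict with 'title' and 'score' keys (assumed
    structure); the score reflects how strongly the audio influences
    the driver's emotional attribute."""
    return [t["title"] for t in
            sorted(target_set, key=lambda t: t["score"], reverse=True)]

tracks = [{"title": "a", "score": 0.6},
          {"title": "b", "score": 0.9},
          {"title": "c", "score": 0.7}]
# playback_order(tracks) -> ["b", "c", "a"]
```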
Optionally, the playing module 604 is further configured to:
and playing a safety prompt voice when the driving speed of the vehicle or the driving road conditions meet a preset condition.
In the embodiment of the application, the driver's emotional attribute is determined based on the driving speed of the vehicle and the driving road conditions, and a corresponding target audio set is then obtained from the audio sets to adjust that emotional attribute. Assisting the driver with audio in this way improves driving safety on the one hand, and the driver's listening experience on the other.
Based on the same technical concept, an embodiment of the present application provides a computer device, as shown in fig. 7, including at least one processor 701 and a memory 702 connected to the at least one processor. This embodiment does not limit the specific connection medium between the processor 701 and the memory 702; in fig. 7 they are connected through a bus, as an example. The bus may be divided into an address bus, a data bus, a control bus, and so on.
In this embodiment, the memory 702 stores instructions executable by the at least one processor 701, and the at least one processor 701 may execute the steps included in the control method for playing back audio by executing the instructions stored in the memory 702.
The processor 701 is the control center of the computer device; it may be connected to various parts of the computer device through various interfaces and lines, and performs audio playback control by running or executing the instructions stored in the memory 702 and calling the data stored in the memory 702. Optionally, the processor 701 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip; in other embodiments, they may be implemented on separate chips.
The processor 701 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, configured to implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 702, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 702 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disc, and so on. The memory 702 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 702 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which, when the program runs on the computer device, causes the computer device to execute the steps of the above-mentioned control method for audio playback.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (12)

1. A method for controlling audio playback, comprising:
acquiring the driving speed and the driving road conditions of a vehicle;
determining the driver's emotional attribute according to the driving speed and the driving road conditions;
obtaining, from respective audio sets, a target audio set for adjusting the driver's emotional attribute, wherein each audio set is divided according to attribute information of the audio;
and playing the audio in the target audio set.
2. The method of claim 1, wherein determining the driver's emotional attribute based on the driving speed and the driving road conditions comprises:
acquiring the driver's physical sign information;
and determining the driver's emotional attribute according to the driver's physical sign information, the driving speed, and the driving road conditions.
3. The method of claim 2, wherein determining the driver's emotional attribute according to the driver's physical sign information, the driving speed, and the driving road conditions comprises:
quantizing the driver's physical sign information, the driving speed, and the driving road conditions respectively, to obtain the influence factors corresponding to each;
and determining the driver's emotional attribute according to the influence factors corresponding to the driver's physical sign information, the driving speed, and the driving road conditions.
4. The method according to claim 3, wherein quantizing the driver's physical sign information, the driving speed, and the driving road conditions respectively, to obtain the influence factors corresponding to each, comprises:
acquiring, from the physical sign intervals, a target physical sign interval matching the driver's physical sign information, and taking the influence factor corresponding to the target physical sign interval as the influence factor corresponding to the driver's physical sign information;
acquiring, from the speed intervals, a target speed interval matching the driving speed, and taking the influence factor corresponding to the target speed interval as the influence factor corresponding to the driving speed;
and acquiring, from the road condition grades, a target road condition grade matching the driving road conditions, and taking the influence factor corresponding to the target road condition grade as the influence factor corresponding to the driving road conditions.
5. The method of claim 1, wherein acquiring the driving speed of the vehicle comprises:
acquiring the position information of the vehicle using a positioning module;
acquiring the time information of the vehicle using a timing module;
and obtaining the driving speed of the vehicle from the position information and the time information of the vehicle.
6. The method of claim 5, wherein the driving road conditions include the smoothness state of the driving road surface and the congestion state of the driving road section;
and acquiring the driving road conditions of the vehicle comprises:
obtaining the smoothness state of the driving road surface using a road surface smoothness detection module;
and acquiring the congestion state of the driving road section from a road condition server.
7. The method of claim 1, wherein the respective sets of audio are divided according to audio signal frequencies to which the audio corresponds.
8. The method of any of claims 1 to 7, wherein the playing the audio in the target audio set comprises:
and sequentially playing the audios in the target audio set according to their arrangement order, wherein the arrangement order is determined by the audios' scores from high to low, and an audio's score is determined by the degree to which the audio influences the driver's emotional attribute.
9. The method of claim 8, further comprising:
and playing a safety prompt voice when the driving speed of the vehicle or the driving road conditions meet a preset condition.
10. An apparatus for controlling audio playback, comprising:
the acquisition module, configured to acquire the driving speed and the driving road conditions of a vehicle;
the identification module, configured to determine the driver's emotional attribute according to the driving speed and the driving road conditions;
the matching module, configured to obtain, from respective audio sets, a target audio set for adjusting the driver's emotional attribute, each audio set being divided according to attribute information of the audio;
and the playing module, configured to play the audio in the target audio set.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 9 are performed by the processor when the program is executed.
12. A computer-readable storage medium, having stored thereon a computer program executable by a computer device, for causing the computer device to perform the steps of the method of any one of claims 1 to 9, when the program is run on the computer device.
CN202011361319.9A 2020-11-27 2020-11-27 Audio playing control method, device, equipment and storage medium Pending CN113535111A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011361319.9A CN113535111A (en) 2020-11-27 2020-11-27 Audio playing control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011361319.9A CN113535111A (en) 2020-11-27 2020-11-27 Audio playing control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113535111A true CN113535111A (en) 2021-10-22

Family

ID=78094300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011361319.9A Pending CN113535111A (en) 2020-11-27 2020-11-27 Audio playing control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113535111A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885257A (en) * 2022-07-12 2022-08-09 北京远特科技股份有限公司 Audio processing method and device, electronic equipment and storage medium
CN114885257B (en) * 2022-07-12 2022-11-04 北京远特科技股份有限公司 Audio processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11853645B2 (en) Machine-led mood change
US9965477B2 (en) Methods and devices for determining media files based on activity levels
WO2017000489A1 (en) On-board voice command identification method and apparatus, and storage medium
CN105022777B (en) Vehicle application based on driving behavior is recommended
US20140201276A1 (en) Accumulation of real-time crowd sourced data for inferring metadata about entities
CN104417457A (en) Driver assistance system
CN105761329A (en) Method of identifying driver based on driving habits
CN109849786A (en) Method, system, device and the readable storage medium storing program for executing of music are played based on speed
EP3091761B1 (en) Method and system for providing driving situation based infotainment
CN113535111A (en) Audio playing control method, device, equipment and storage medium
CN112052056A (en) Interaction method and device of vehicle-mounted intelligent assistant, vehicle-mounted equipment and vehicle
WO2021185468A1 (en) Technique for providing a user-adapted service to a user
US11197097B2 (en) Devices, systems and processes for providing adaptive audio environments
CN109508403B (en) Matching method and device for vehicle-mounted music and vehicle-mounted intelligent controller
CN112109715A (en) Method, device, medium and system for generating vehicle power output strategy
CN109342765A (en) Vehicle collision detection method and device
CN115782911B (en) Data processing method and related device for steering wheel hand-off event in driving scene
CN113536028A (en) Music recommendation method and device
CN107343160B (en) Night motorcycle caution system
CN114130038B (en) Amusement vehicle, control method and device thereof, storage medium and terminal
US11874129B2 (en) Apparatus and method for servicing personalized information based on user interest
CN114078487A (en) Music playing method and device, electronic equipment and computer readable storage medium
CN108460057A A kind of user's stroke method for digging and device based on unsupervised learning
CN106233093B Function candidate's suggestion device
CN109101548A (en) A kind of multimedia acquisition methods and system based on recommended technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053937

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination