CN114973740B - Method and device for determining voice broadcasting time and electronic equipment


Info

Publication number
CN114973740B
CN114973740B (application CN202210633665.0A)
Authority
CN
China
Prior art keywords
time
voice
target
lane change
voice packet
Prior art date
Legal status
Active
Application number
CN202210633665.0A
Other languages
Chinese (zh)
Other versions
CN114973740A (en)
Inventor
韩雅娟
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210633665.0A
Publication of CN114973740A
Application granted
Publication of CN114973740B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/096855 - Systems involving transmission of navigation instructions to the vehicle where the output is provided in a suitable form to the driver
    • G08G1/096872 - Systems involving transmission of navigation instructions to the vehicle where the output is provided in a suitable form to the driver where instructions are given per voice
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/096877 - Systems involving transmission of navigation instructions to the vehicle where the input to the navigation device is provided by a suitable I/O arrangement
    • G08G1/096883 - Systems involving transmission of navigation instructions to the vehicle where the input to the navigation device is provided by a suitable I/O arrangement where input information is obtained using a mobile device, e.g. a mobile phone, a PDA
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J3/00 - Time-division multiplex systems
    • H04J3/02 - Details
    • H04J3/06 - Synchronising arrangements
    • H04J3/0602 - Systems characterised by the synchronising information used
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

The disclosure provides a method and a device for determining voice broadcasting time and electronic equipment, relates to the technical field of computers, and particularly relates to the fields of voice technology and intelligent transportation. The specific implementation scheme is as follows: acquiring a first time advance for voice broadcasting based on a target voice packet; acquiring a voice broadcasting time difference between a current voice packet used by a navigated object and the target voice packet; determining the lane change advancing time of the navigated object based on the driving data of the navigated object; and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.

Description

Method and device for determining voice broadcasting time and electronic equipment
Technical Field
The disclosure relates to the technical field of computers, in particular to the field of voice technology and intelligent traffic, and specifically relates to a method and a device for determining voice broadcasting time and electronic equipment.
Background
While driving, the user can use a map navigation application for navigation. During navigation, the map navigation application can broadcast navigation events to the user by voice, such as an upcoming lane change, speeding, or congestion ahead, so as to provide the user with a safer driving experience. Currently, map navigation applications mostly rely on the distance to a navigation event to calculate the voice broadcasting time, and this timing has a crucial impact on whether the user has enough time to perform the corresponding driving operation.
Disclosure of Invention
The disclosure provides a method and a device for determining voice broadcasting time and electronic equipment.
According to a first aspect of the present disclosure, there is provided a method for determining a voice broadcast opportunity, including:
acquiring a first time advance for voice broadcasting based on a target voice packet;
acquiring a voice broadcasting time difference between a current voice packet used by a navigated object and the target voice packet;
determining the lane change advancing time of the navigated object based on the driving data of the navigated object;
and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.
According to a second aspect of the present disclosure, there is provided a device for determining a voice broadcast opportunity, including:
the first acquisition module is used for acquiring a first time advance for voice broadcasting based on the target voice packet;
the second acquisition module is used for acquiring the voice broadcasting time difference between the current voice packet used by the navigated object and the target voice packet;
the first determining module is used for determining the lane change advancing time of the navigated object based on the running data of the navigated object;
and the second determining module is used for determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
In the embodiment of the disclosure, for the different voice packets used by the navigated object, the electronic device can adjust the voice broadcasting time corresponding to each voice packet in a targeted manner, so that more timely and effective navigation voice broadcasting can be provided for the user, the user's driving is better assisted, and the user experience of the navigation application is effectively improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart illustrating a method for determining a voice broadcast opportunity according to a first embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a device for determining a voice broadcast opportunity according to a second embodiment of the present disclosure;
fig. 3 is a block diagram of an electronic device for implementing a method for determining a voice broadcast opportunity according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Referring to fig. 1, fig. 1 is a flowchart of a method for determining a voice broadcast opportunity according to an embodiment of the disclosure, as shown in fig. 1, the method includes the following steps:
step S101, a first time advance of voice broadcasting based on a target voice packet is obtained.
The method provided by the embodiment of the disclosure may be applied to electronic devices such as a mobile phone, a tablet computer, a vehicle-mounted terminal, and the like. The technical solutions provided by the embodiments of the present disclosure will be explained below with an electronic device as an execution body of the method.
The method provided by the embodiment of the disclosure may be performed while a navigation application in the electronic device is running and navigating a navigated object (such as a vehicle). For example, the navigated object is a vehicle, and during the travel of the vehicle, the user may obtain driving assistance by turning on the navigation application in the electronic device.
In this step, the electronic device obtains a first time advance for voice broadcasting based on the target voice packet. The first time advance of the target voice packet may be the time difference between the moment the target voice packet broadcasts a target address and the moment the navigated object reaches that target address. For example, the target voice packet broadcasts "turn right at the front intersection" at a first moment, and the navigated object reaches the front intersection and turns right at a second moment; the first time advance is the time difference between the first moment and the second moment. That is, the first time advance indicates how far in advance the target voice packet broadcasts the target address.
In the embodiment of the disclosure, after determining the target voice packet, the electronic device may determine a first time advance of the voice broadcasting of the target voice packet based on the historical data of the target voice packet.
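As a non-limiting illustration, the following Python sketch shows one way such a first time advance could be estimated from historical broadcasting data. The data layout (a list of broadcast-moment/arrival-moment pairs in seconds) and all names are assumptions made for this example and are not prescribed by the disclosure.

```python
from statistics import mean

def first_time_advance(history):
    """Estimate the first time advance of the target voice packet.

    `history` is assumed to be a list of (broadcast_moment, arrival_moment)
    pairs in seconds: the moment the target voice packet broadcast a target
    address and the moment the navigated object actually reached that address.
    The returned value is the average advance, i.e. how far ahead of the
    target address the target voice packet broadcasts.
    """
    if not history:
        return 0.0  # no historical data: assume no advance
    return mean(arrival - broadcast for broadcast, arrival in history)

# Example: broadcasts occurred roughly 9-11 s before the target address.
t0 = first_time_advance([(100.0, 110.0), (205.0, 214.0), (330.0, 341.0)])
print(round(t0, 1))  # 10.0
```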
Optionally, the target voice packet may be any one of the following:
the voice packet most frequently used by the navigated object;
the default voice packet of the navigation application;
the voice packet most frequently used across the navigation application.
For example, if the target voice packet is the voice packet most frequently used by the navigated object, the first time advance of its voice broadcasting can be obtained based on the navigated object's historical data for that voice packet. Or, the target voice packet is the default voice packet of the navigation application; for the default voice packet, the broadcasting speech rate and the broadcasting time advance relative to the target address can be regarded as default values, so the electronic device can acquire the first time advance of the default voice packet directly. Alternatively, the target voice packet is the voice packet most frequently used across the navigation application, and the electronic device may determine its first time advance based on the historical broadcasting data of that voice packet. Therefore, the target voice packet is not limited to a single type of voice packet, which enriches the possible types of target voice packets.
Step S102, obtaining the voice broadcasting time difference between the current voice packet used by the navigated object and the target voice packet.
Alternatively, the current voice packet used by the navigated object may be a voice packet different from the target voice packet. For example, the target voice packet is the navigation application's default voice packet, while the current voice packet is a voice packet actively selected by the navigated object, such as a voice packet in which a specific person performs the broadcasting. In addition, when the user starts the navigation application, the electronic device may determine the target voice packet based on the current voice packet selected by the user, so as to ensure that the target voice packet is different from the current voice packet.
In this step, after determining the current voice packet used by the navigated object, the electronic device may determine a second time advance of voice broadcasting by the current voice packet based on the historical broadcasting data of the current voice packet. For example, the current voice packet broadcasts "make a U-turn at the front intersection" at a first historical moment, and in that historical record the navigated object reaches the front intersection at a second historical moment; the second time advance is the time difference between the first historical moment and the second historical moment. Alternatively, the second time advance is the average time advance of the current voice packet's historical broadcasts relative to the target address.
After determining the second time advance of the current voice packet, the electronic device may determine a voice broadcast time difference between the current voice packet and the target voice packet based on the first time advance and the second time advance. For example, the voice broadcast time difference is an absolute value of a difference between the first time advance and the second time advance.
Step S103, determining the lane change advancing time of the navigated object based on the running data of the navigated object.
In this step, the electronic device may acquire historical driving data of the navigated object and determine the lane change advance time of the navigated object based on the historical driving data. For example, when the voice packet prompts the navigated object to change lanes at a target position, the navigated object actually changes lanes at a first lane change moment, while the navigation application estimates that the navigated object should change lanes into the target position at a second lane change moment; the lane change advance time of the navigated object can then be calculated from the first lane change moment and the second lane change moment, for example as the absolute value of the difference between them.
It should be noted that the lane change advance time may be an average lane change advance time calculated from the historical driving data of the navigated object, or a lane change advance time calculated from the most recent historical driving data, which is not specifically limited in the present disclosure.
Step S104, determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.
The voice broadcasting time may refer to the moment at which the target voice is broadcast, for example how far in advance of the target address it is broadcast. For example, assume the target broadcast statement is "turn right at the front intersection", where the front intersection is also the target address. It will be appreciated that a voice packet is typically played ahead of time, at some distance from the target address or at some moment before the navigated object reaches the target address. The voice broadcasting time is thus the moment at which the current voice packet performs the broadcast before the navigated object reaches the target address.
In this step, after determining the first time advance of voice broadcasting based on the target voice packet, the voice broadcasting time difference between the current voice packet and the target voice packet, and the lane change advance time of the navigated object, the electronic device determines, based on these parameters, the voice broadcasting time of the navigated object when using the current voice packet. For example, the voice broadcasting time is determined based on the sum of the first time advance, the voice broadcasting time difference and the lane change advance time, or based on that sum together with a target weight value; the embodiments of the present disclosure do not enumerate further options.
In the embodiment of the disclosure, after determining the first time advance of voice broadcasting based on the target voice packet, the voice broadcasting time difference between the current voice packet and the target voice packet, and the lane change advance time of the navigated object, the electronic device may determine the voice broadcasting time of the navigated object when using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time, and then perform voice broadcasting according to that timing. In this way, for the different voice packets used by the navigated object (that is, different current voice packets), the electronic device can adjust the corresponding voice broadcasting time in a targeted manner, providing the user with more timely and effective navigation voice broadcasts, better assisting the user's driving, and effectively improving the user experience of the navigation application.
Optionally, the step S102 may include:
acquiring the speech rate of a current speech packet used by a navigated object;
determining a first broadcasting time based on the speech speed and the target broadcasting statement;
acquiring a second broadcasting time for the target voice packet to broadcast the target broadcast statement;
and determining a voice broadcasting time difference between the current voice packet used by the navigated object and the target voice packet based on the first broadcasting time and the second broadcasting time.
It can be understood that, when the navigated object starts the navigation application and uses the current voice packet, the electronic device can obtain the speech rate (or broadcasting speech rate) of the current voice packet, and can calculate the first broadcasting time based on the speech rate and the target broadcast statement to be broadcast by the current voice packet. The target broadcast statement may be the latest statement that the current voice packet is preparing to broadcast. For example, if the speech rate is v1 and the target broadcast statement is "turn right at the front intersection", the electronic device may calculate, based on the word count of the target broadcast statement and the speech rate v1, the first broadcasting time required for the current voice packet to finish broadcasting the statement.
The electronic device obtains the speech rate v2 of the target voice packet and can calculate, based on the word count of the target broadcast statement and the speech rate v2, the second broadcasting time required for the target voice packet to finish broadcasting the same statement. In this way, the time required by the current voice packet and by the target voice packet to broadcast the same target broadcast statement can both be calculated.
Further, a difference value between the first broadcasting time and the second broadcasting time is obtained, and the difference value is used as a voice broadcasting time difference between the current voice packet and the target voice packet. It should be noted that the voice broadcast time difference may be a positive value or a negative value.
In the embodiment of the disclosure, the electronic device determines the voice broadcasting time difference between the current voice packet and the target voice packet from the time each of them needs to broadcast the same target broadcast statement. In this way, the voice broadcasting time difference of different voice packets relative to the target voice packet can be calculated in a targeted manner according to whichever voice packet the navigated object uses, thereby ensuring the accuracy of the voice broadcasting time.
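As a non-limiting illustration, a minimal Python sketch of this speech-rate comparison is given below. It assumes the speech rates are expressed in characters per second and approximates the word count by the statement length; these units and all names are assumptions for the example only.

```python
def broadcast_time_difference(statement, current_rate, target_rate):
    """Voice broadcasting time difference between the current voice packet
    and the target voice packet for the same target broadcast statement.

    Speech rates are assumed to be in characters per second, and the word
    count is approximated by the statement length. A positive result means
    the current voice packet needs longer than the target voice packet, so
    the value may also be negative.
    """
    word_count = len(statement)
    first_broadcast_time = word_count / current_rate   # current voice packet
    second_broadcast_time = word_count / target_rate   # target voice packet
    return first_broadcast_time - second_broadcast_time

# Example: a 10-character statement, current packet slower than the target.
t1 = broadcast_time_difference("turn right", current_rate=4.0, target_rate=5.0)
print(round(t1, 2))  # 0.5
```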
Optionally, the step S103 may include:
acquiring a historical lane change advancing distance of the navigated object and a target lane change advancing distance of the navigated object under the condition of using the target voice packet;
and acquiring the lane change advance time of the navigated object based on the historical lane change advance distance and the target lane change advance distance.
The lane change advance distance may be the distance between the position at which the navigated object changes lanes before reaching the target position and the target position itself.
Alternatively, the historical lane change advance distance of the navigated object may be calculated based on the historical driving data of the navigated object. For example, all lane change advance distances of the navigated object in a historical time period can be collected, the average lane change advance distance over that period calculated, and the average used as the historical lane change advance distance of the navigated object. Note that the historical lane change advance distance may be derived from driving data collected whether or not a voice packet was in use; it does not depend on which voice packet the navigated object was using at the time.
The target lane change advance distance of the navigated object when using the target voice packet is obtained by collecting the historical driving data of the navigated object recorded while the target voice packet was in use. For example, if the target voice packet is the voice packet most frequently used by the navigated object, the electronic device may acquire all historical driving data of the navigated object recorded while the target voice packet was in use and determine the target lane change advance distance from that data.
Further, the lane change advance time of the navigated object is obtained based on the historical lane change advance distance and the target lane change advance distance. For example, the lane change advance time may be calculated from the difference between the historical lane change advance distance and the target lane change advance distance together with the traveling speed of the navigated object. Alternatively, it may be calculated from the historical lane change advance time and the traveling speed of the navigated object, or in other manners, which is not specifically limited in this disclosure.
In the embodiment of the disclosure, the historical lane change advance distance of the navigated object and the target lane change advance distance when using the target voice packet are obtained in order to determine the lane change advance time of the navigated object. In this way, both the historical driving data of the navigated object and its driving data while using the target voice packet are taken into account, which effectively improves the accuracy of the lane change advance time and, in turn, ensures the accuracy of the voice broadcasting time. Moreover, for different users, personalized features such as their historical lane change preferences can be considered when adjusting the voice broadcasting time, so that more personalized and effective navigation voice broadcasts are provided for different users. This reduces unreasonable lane changes caused by inaccurate broadcasting time, improves driving safety, and increases the user's trust in the navigation application.
Optionally, the obtaining the historical lane change advance distance of the navigated object includes:
acquiring the type of the road where the navigated object is currently located;
and acquiring the historical lane change advancing distance of the navigated object on the current road type.
The road type may include expressways, urban roads, and the like.
In the embodiment of the disclosure, the historical lane change advance distance of the navigated object on the current road type is determined according to the road type on which the navigated object is currently located. That is, if the navigated object is currently on an expressway, only its historical driving data on expressways is used to determine its historical lane change advance distance on expressways; if it is currently on an urban road, only its historical driving data on urban roads is used to determine its historical lane change advance distance on urban roads.
It will be appreciated that the traveling speed of the navigated object also varies across road types. For example, ignoring traffic congestion, the navigated object travels faster on an expressway than on an urban road, and because of this difference in traveling speed its lane change advance distance also differs between road types. By obtaining the historical driving data for the road type on which the navigated object is currently located, the historical lane change advance distance on that road type can be calculated in a targeted manner. This improves the accuracy of the historical lane change advance distance and avoids the inaccuracy that would result from mixing together the historical driving data of all road types without distinguishing them.
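A minimal Python sketch of this road-type filtering is shown below, assuming each historical lane change record carries a road type label and an advance distance in metres; the record layout and all names are illustrative assumptions only.

```python
def historical_lane_change_advance_distance(records, current_road_type):
    """Average lane change advance distance on the current road type.

    `records` is assumed to be a list of dicts with a "road_type" key
    (e.g. "expressway" or "urban_road") and an "advance_distance" key in
    metres; only records matching the current road type are averaged.
    """
    distances = [r["advance_distance"] for r in records
                 if r["road_type"] == current_road_type]
    if not distances:
        return None  # no history on this road type
    return sum(distances) / len(distances)

records = [
    {"road_type": "expressway", "advance_distance": 250.0},
    {"road_type": "expressway", "advance_distance": 310.0},
    {"road_type": "urban_road", "advance_distance": 90.0},
]
print(historical_lane_change_advance_distance(records, "expressway"))  # 280.0
```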
Optionally, the obtaining the lane change advance time of the navigated object based on the historical lane change advance distance and the target lane change advance distance includes:
acquiring a distance difference value between the historical lane change advancing distance and the target lane change advancing distance;
and determining the lane change advancing time of the navigated object based on the distance difference value and the current running speed of the navigated object.
In the embodiment of the disclosure, after determining the historical lane change advance distance of the navigated object and the target lane change advance distance when using the target voice packet, the distance difference between the two is obtained, and the quotient of this distance difference and the current traveling speed of the navigated object is calculated as the lane change advance time of the navigated object. In this way, the lane change advance time takes into account the current traveling speed of the navigated object, the historical lane change advance distance, and the target lane change advance distance when using the target voice packet, which ensures the accuracy of the lane change advance time.
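The quotient described above can be sketched as follows; the units (metres and metres per second) and the handling of a zero speed are assumptions made for this example, not requirements of the disclosure.

```python
def lane_change_advance_time(historical_distance_m, target_distance_m,
                             current_speed_mps):
    """Lane change advance time as the quotient of the distance difference
    (historical minus target lane change advance distance) and the current
    traveling speed; the result may be negative.
    """
    if current_speed_mps <= 0:
        return 0.0  # stationary vehicle: no meaningful advance time
    return (historical_distance_m - target_distance_m) / current_speed_mps

# Example: lanes are historically changed 280 m ahead, 200 m ahead with the
# target voice packet, at a current speed of 20 m/s -> 4 s advance time.
t2 = lane_change_advance_time(280.0, 200.0, 20.0)
print(t2)  # 4.0
```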
Optionally, the step S104 may include:
acquiring the sum of the first time advance, the voice broadcasting time difference and the lane change advance time;
and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the sum value.
In the embodiment of the present disclosure, the first time advance is the time advance with which the target voice packet performs voice broadcasting, expressed as a time value, for example t0; the voice broadcasting time difference is the time difference, for the same target broadcast statement, between broadcasting with the current voice packet and broadcasting with the target voice packet, which is also a time value, for example t1; the lane change advance time is the time by which the navigated object changes lanes ahead of the target position, for example t2. The sum is then T = t0 + t1 + t2, and the voice broadcasting time of the navigated object when using the current voice packet is determined according to this sum T. It should be noted that t1 and t2 may each be positive or negative, so T may also be positive or negative.
It should be noted that, after determining the voice broadcasting time, the electronic device may broadcast the statement to be broadcast according to it. For example, if the originally scheduled broadcasting moment of the statement is T1 and the voice broadcasting time determined above is T, the electronic device may broadcast the statement at moment T1 + T. Since T may be positive or negative, the adjusted broadcasting moment may be earlier or later than the original moment T1. In this way, the broadcasting moment of the current voice packet is adjusted based on the voice broadcasting time, which better fits the user's habits with the current voice packet, makes the broadcasting of the current voice packet more personalized and intelligent, and provides the user with a more reasonable navigation service.
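A minimal sketch of combining the three quantities and adjusting the scheduled broadcast, following the sum T = t0 + t1 + t2 and the T1 + T adjustment described above, is given below. The concrete values reuse the illustrative numbers from the earlier sketches and are assumptions only.

```python
def adjusted_broadcast_moment(scheduled_moment_t1, t0, t1, t2):
    """Moment at which the current voice packet should broadcast the statement.

    T = t0 + t1 + t2 is the sum of the first time advance, the voice
    broadcasting time difference and the lane change advance time (t1 and
    t2 may be negative), and a statement originally scheduled at T1 is
    broadcast at T1 + T instead, which may be earlier or later than T1.
    """
    total_adjustment = t0 + t1 + t2  # T, may be positive or negative
    return scheduled_moment_t1 + total_adjustment

# Example: statement originally scheduled at second 600 of the trip.
print(adjusted_broadcast_moment(600.0, t0=10.0, t1=0.5, t2=4.0))  # 614.5
```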
Optionally, the method further comprises:
and under the condition that the statement to be broadcasted is determined, broadcasting the statement to be broadcasted based on target time by using the current voice packet, wherein the target time is the sum value.
For example, the sum value T = t0 + t1 + t2; if the scheduled broadcasting moment of the statement to be broadcast is T1, the statement may be broadcast at T1 + T, that is, the current voice packet broadcasts the statement to be broadcast based on T1 + T.
It should be noted that the voice broadcasting time may be adjusted in real time. For example, if the traveling speed of the navigated object changes, the lane change advance time changes, and the voice broadcasting time changes with it. The electronic device may collect parameters in real time, such as the traveling speed of the navigated object, the road type it is on, and the voice broadcasting time difference, so as to adjust the voice broadcasting time of the current voice packet in real time. This effectively improves the accuracy of the voice broadcasting time and provides the user with an accurate and effective navigation service.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a device for determining a voice broadcast opportunity according to an embodiment of the present disclosure, and as shown in fig. 2, the device 200 for determining a voice broadcast opportunity includes:
a first obtaining module 201, configured to obtain a first time advance for performing voice broadcasting based on a target voice packet;
a second obtaining module 202, configured to obtain a voice broadcast time difference between a current voice packet used by the navigated object and the target voice packet;
a first determining module 203, configured to determine a lane change advance time of the navigated object based on the driving data of the navigated object;
the second determining module 204 is configured to determine a voice broadcast opportunity of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcast time difference, and the lane change advance time.
Optionally, the second obtaining module 202 is further configured to:
acquiring the speech rate of a current speech packet used by a navigated object;
determining a first broadcasting time based on the speech speed and the target broadcasting statement;
acquiring a second broadcasting time for the target voice packet to broadcast the target broadcast statement;
and determining a voice broadcasting time difference between the current voice packet used by the navigated object and the target voice packet based on the first broadcasting time and the second broadcasting time.
Optionally, the first determining module 203 includes:
the first acquisition unit is used for acquiring the historical lane change advancing distance of the navigated object and the target lane change advancing distance of the navigated object under the condition of using the target voice packet;
and the second acquisition unit is used for acquiring the lane change advancing time of the navigated object based on the historical lane change advancing distance and the target lane change advancing distance.
Optionally, the first obtaining unit is further configured to:
acquiring the type of the road where the navigated object is currently located;
and acquiring the historical lane change advancing distance of the navigated object on the current road type.
Optionally, the first determining module 203 is further configured to:
acquiring a distance difference value between the historical lane change advancing distance and the target lane change advancing distance;
and determining the lane change advancing time of the navigated object based on the distance difference value and the current running speed of the navigated object.
Optionally, the second determining module 204 is further configured to:
acquiring the sum of the first time advance, the voice broadcasting time difference and the lane change advance time;
and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the sum value.
Optionally, the apparatus further comprises:
and the broadcasting module is used for broadcasting the statement to be broadcasted based on target time by using the current voice packet under the condition of determining the statement to be broadcasted, wherein the target time is the sum value.
Optionally, the target voice packet is any one of the following:
the voice packet most frequently used by the navigated object;
the default voice packet of the navigation application;
the voice packet most frequently used across the navigation application.
In the embodiment of the disclosure, the device can purposefully adjust the voice broadcasting time corresponding to different voice packets based on different voice packets used by the navigated object, so as to provide more timely and effective navigation voice broadcasting for the user, thereby better assisting the user in driving and effectively improving the user experience of the navigation application.
It should be noted that, the device provided in the embodiment of the present disclosure can implement all the processes in the embodiment of the method described in fig. 1, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the related user personal information all conform to the regulations of related laws and regulations, and the public sequence is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 3 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 3, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 801 performs the respective methods and processes described above, for example, a method of determining a voice broadcast opportunity. For example, in some embodiments, the method of determining a voice broadcast opportunity may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the above-described method of determining a voice broadcast opportunity may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the above-described method of determining the timing of the voice broadcast in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for determining voice broadcasting time comprises the following steps:
acquiring a first time advance for voice broadcasting based on a target voice packet;
acquiring a voice broadcasting time difference between a current voice packet used by a navigated object and the target voice packet, wherein the voice broadcasting time difference is determined based on the time spent by the current voice packet and the target voice packet broadcasting the same target broadcasting statement;
determining a lane change advance time of the navigated object based on the travel data of the navigated object, the lane change advance time being determined based on a historical lane change advance distance of the navigated object and a target lane change advance distance in the case of using the target voice packet;
and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.
2. The method of claim 1, wherein the obtaining the historical lane-change advance distance of the navigated object comprises:
acquiring the type of the road where the navigated object is currently located;
and acquiring the historical lane change advancing distance of the navigated object on the current road type.
3. The method of claim 1, wherein the obtaining the lane-change advance time for the navigated object based on the historical lane-change advance distance and the target lane-change advance distance comprises:
acquiring a distance difference value between the historical lane change advancing distance and the target lane change advancing distance;
and determining the lane change advancing time of the navigated object based on the distance difference value and the current running speed of the navigated object.
4. The method of any of claims 1-3, wherein the determining a voice broadcast opportunity of the navigated object using the current voice package based on the first time advance, the voice broadcast time difference, and the lane change advance time comprises:
acquiring the sum of the first time advance, the voice broadcasting time difference and the lane change advance time;
and determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the sum value.
5. The method of claim 4, wherein the method further comprises:
and under the condition that the statement to be broadcasted is determined, broadcasting the statement to be broadcasted based on target time by using the current voice packet, wherein the target time is the sum value.
6. A method according to any one of claims 1-3, wherein the target voice packet is any one of the following:
the voice packet most frequently used by the navigated object;
the default voice packet of the navigation application;
the voice packet most frequently used across the navigation application.
7. A device for determining a voice broadcast opportunity, comprising:
the first acquisition module is used for acquiring a first time advance for voice broadcasting based on the target voice packet;
the second acquisition module is used for acquiring a voice broadcasting time difference between a current voice packet used by a navigated object and the target voice packet, wherein the voice broadcasting time difference is determined based on the time spent by the current voice packet and the target voice packet for broadcasting the same target broadcasting statement;
the first determining module is used for determining lane change advance time of the navigated object based on the running data of the navigated object, wherein the lane change advance time is determined based on the historical lane change advance distance of the navigated object and the target lane change advance distance under the condition of using the target voice packet;
and the second determining module is used for determining the voice broadcasting time of the navigated object under the condition of using the current voice packet based on the first time advance, the voice broadcasting time difference and the lane change advance time.
8. The apparatus of claim 7, wherein the first determination module is further to:
acquiring the type of the road where the navigated object is currently located;
and acquiring the historical lane change advancing distance of the navigated object on the current road type.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202210633665.0A 2022-06-06 2022-06-06 Method and device for determining voice broadcasting time and electronic equipment Active CN114973740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210633665.0A CN114973740B (en) 2022-06-06 2022-06-06 Method and device for determining voice broadcasting time and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210633665.0A CN114973740B (en) 2022-06-06 2022-06-06 Method and device for determining voice broadcasting time and electronic equipment

Publications (2)

Publication Number Publication Date
CN114973740A CN114973740A (en) 2022-08-30
CN114973740B (en) 2023-09-12

Family

ID=82958929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210633665.0A Active CN114973740B (en) 2022-06-06 2022-06-06 Method and device for determining voice broadcasting time and electronic equipment

Country Status (1)

Country Link
CN (1) CN114973740B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH041898A (en) * 1990-04-18 1992-01-07 Sumitomo Electric Ind Ltd Voice guiding equipment
CN102322866A (en) * 2011-07-04 2012-01-18 深圳市子栋科技有限公司 Navigation method and system based on natural speech recognition
CN110018806A (en) * 2018-11-22 2019-07-16 阿里巴巴集团控股有限公司 A kind of method of speech processing and device
CN110277092A (en) * 2019-06-21 2019-09-24 北京猎户星空科技有限公司 A kind of voice broadcast method, device, electronic equipment and readable storage medium storing program for executing
CN112118527A (en) * 2019-06-19 2020-12-22 华为技术有限公司 Multimedia information processing method, device and storage medium
CN113380229A (en) * 2021-06-08 2021-09-10 阿波罗智联(北京)科技有限公司 Voice response speed determination method, related device and computer program product
WO2021232726A1 (en) * 2020-05-22 2021-11-25 百度在线网络技术(北京)有限公司 Navigation audio playback method, apparatus and device, and computer storage medium
CN114184197A (en) * 2020-09-15 2022-03-15 阿里巴巴集团控股有限公司 Navigation voice broadcasting method, equipment, system and storage medium


Also Published As

Publication number Publication date
CN114973740A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN113071493B (en) Method, apparatus, storage medium and program product for lane change control of vehicle
CN112700667A (en) Method, apparatus, electronic device, and medium for assisting vehicle driving
CN113135193B (en) Method, device, storage medium and program product for outputting early warning information
CN112560680A (en) Lane line processing method and device, electronic device and storage medium
WO2023273780A1 (en) Vehicle positioning method and apparatus, electronic device, and storage medium
CN114625744A (en) Updating method and device of electronic map
CN113899381A (en) Method, apparatus, device, medium and product for generating route information
CN114973740B (en) Method and device for determining voice broadcasting time and electronic equipment
CN111951583A (en) Prompting method and electronic equipment
CN112866915B (en) Navigation information processing method and device, electronic equipment and storage medium
CN113119999B (en) Method, device, equipment, medium and program product for determining automatic driving characteristics
CN113450794B (en) Navigation broadcasting detection method and device, electronic equipment and medium
CN115876216A (en) Lane-changing navigation path planning method and device, electronic equipment and storage medium
CN114689069A (en) Navigation route processing method and device of automatic driving equipment and electronic equipment
CN114889587A (en) Method, device, equipment and medium for determining speed of passenger-replacing parking
CN114689061A (en) Navigation route processing method and device of automatic driving equipment and electronic equipment
CN114252086A (en) Prompt message output method, device, equipment, medium and vehicle
CN114419593A (en) Information processing method, device, equipment and storage medium
CN114179805A (en) Driving direction determining method, device, equipment and storage medium
CN115294764B (en) Crosswalk area determination method, crosswalk area determination device, crosswalk area determination equipment and automatic driving vehicle
CN112735130A (en) Traffic data processing method and device, electronic equipment and medium
CN114419876B (en) Road saturation evaluation method and device, electronic equipment and storage medium
CN115507866B (en) Map data processing method and device, electronic equipment and medium
CN116403455B (en) Vehicle control method, device, equipment and storage medium
EP4095542A2 (en) Positioning method, on-board device,terminal device and positioning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant