CN111578965A - Navigation broadcast information processing method and device, electronic equipment and storage medium - Google Patents

Navigation broadcast information processing method and device, electronic equipment and storage medium

Info

Publication number
CN111578965A
CN111578965A (application CN202010366926.8A); granted as CN111578965B
Authority
CN
China
Prior art keywords
user
voice
navigation
scene
auxiliary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010366926.8A
Other languages
Chinese (zh)
Other versions
CN111578965B (en)
Inventor
吴迪
黄际洲
丁世强
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010366926.8A
Publication of CN111578965A
Application granted
Publication of CN111578965B
Active legal status
Anticipated expiration


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/36 — Input/output arrangements for on-board computers
    • G01C 21/3608 — Destination input or retrieval using speech input, e.g. using speech recognition
    • G01C 21/3629 — Guidance using speech or audio output, e.g. text-to-speech
    • G01C 21/3697 — Output of additional, non-guidance related information, e.g. low fuel level

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The application discloses a navigation broadcast information processing method and device, an electronic device, and a readable storage medium, relating to the field of voice technology in map applications. The specific implementation scheme is as follows: determining a user travel scene during user navigation; determining whether the user travel scene is a promotion scene of a brand party; and if so, broadcasting a navigation auxiliary voice for the user, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet. The technology of the application improves the degree to which the navigation auxiliary voice matches user needs, thereby improving the conversion rate of brand advertisements, avoiding resource waste, and even improving the user's navigation experience.

Description

Navigation broadcast information processing method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of map applications, in particular to the field of voice technology, and specifically to a navigation broadcast information processing method and device, an electronic device, and a storage medium.
Background
The navigation system uses position, speed, and time information provided by a positioning system, together with the route planning capability of a high-precision navigation electronic map, to provide a navigation function for the user: it helps the user accurately plan a driving route on the electronic map in real time and guides the user to drive along the planned route to the destination.
Voice navigation applies intelligent speech technologies, represented by speech recognition and speech coding and decoding, to provide navigation auxiliary information to the user during navigation. The existing voice navigation broadcast mode is monotonous: the broadcast information matches user needs poorly, consumes navigation resources, and cannot meet user requirements.
Disclosure of Invention
Provided are a method, apparatus, device, and storage medium for navigation broadcast information processing.
According to a first aspect, there is provided a navigation broadcast information processing method, including:
determining a user travel scene in the user navigation process;
determining whether the user travel scene is a promotion scene of a brand party;
if so, broadcasting a navigation auxiliary voice for the user, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet.
According to a second aspect, there is provided a navigation broadcast information processing apparatus including:
the travel scene determining module is used for determining a user travel scene in the user navigation process;
the promotion scene determining module is used for determining whether the user trip scene is a promotion scene of a brand party;
the voice broadcasting module is used for broadcasting the navigation auxiliary voice for the user if the user travel scene is a promotion scene of a brand party, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet.
According to a third aspect, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the navigation broadcast information processing method according to any one of the embodiments of the present application.
According to a fourth aspect, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the navigation broadcast information processing method according to any one of the embodiments of the present application.
According to the technology of the application, the degree to which the navigation auxiliary voice matches user needs is improved, thereby improving the conversion rate of brand advertisements and avoiding resource waste, and the user's navigation experience can even be improved rather than sacrificed.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flow chart of a navigation broadcast information processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a navigation broadcast information processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a voice recording interface provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of a voice recording interface provided in accordance with an embodiment of the present application;
fig. 5 is a schematic structural diagram of a navigation broadcast information processing device according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a navigation broadcast information processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic flow chart of a navigation broadcast information processing method according to an embodiment of the present application. This embodiment is applicable where the user's trip is assisted by navigation auxiliary voice during navigation. The navigation broadcast information processing method disclosed in this embodiment may be executed by an electronic device, and specifically by a navigation broadcast information processing apparatus, where the apparatus may be implemented in software and/or hardware and configured in the electronic device. Referring to fig. 1, the navigation broadcast information processing method provided in this embodiment includes:
and S110, determining a user travel scene in the user navigation process.
The user travel scene refers to a travel state of the user in the navigation process, and can be determined according to user characteristic information in the user navigation process. Specifically, a plurality of candidate travel scenes may be provided in advance, and in the user navigation process, the user characteristic information is acquired in real time, and the user travel scene is selected from the candidate travel scenes according to the user characteristic information.
In an alternative embodiment, S110 includes: determining the user travel scene according to at least one of the user's vehicle registration information, navigation route information, navigation environment information, and real-time road condition information; where the user travel scene includes at least one of the following: a start-navigation scene, a long-distance driving scene, a congestion scene, a refueling scene, a service area scene, a night driving scene, and an end-navigation scene. It should be noted that the embodiment of the present application does not specifically limit the candidate travel scenes; for example, a candidate travel scene may be customized for a brand party's promotion needs.
The vehicle registration information refers to the vehicle's registration information in the navigation system, such as whether the vehicle is a fuel vehicle or a new energy vehicle, and whether its license plate is locally registered; the navigation route information refers to the route information generated in response to a navigation request containing the user's start point and end point, such as the navigation destination and navigation mileage; the navigation environment information refers to the user's driving environment, for example time information such as daytime, night, and rush hour, weather information such as rain, snow, fog, and dust, and road type information such as expressway, urban road, and rural road. The real-time road condition information is acquired dynamically during navigation and may include the remaining mileage, remaining travel time, traffic congestion type, and the like. Accurately determining the user travel scene improves the degree to which the subsequent navigation auxiliary voice matches the user's needs.
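As a concrete illustration of the scene-selection step described above, the following is a minimal rule-based sketch. The scene names, field names, and thresholds are illustrative assumptions for demonstration only and do not appear in the patent:

```python
# Hypothetical sketch: pick a user travel scene from candidate scenes
# using simple rules over the four information sources named above.
def determine_travel_scene(vehicle_info, route_info, env_info, traffic_info):
    """Return a candidate travel scene name; all keys are assumed names."""
    if traffic_info.get("congestion_level", 0) >= 2:
        return "congestion"
    if vehicle_info.get("fuel_type") == "gasoline" and traffic_info.get("fuel_low"):
        return "refueling"
    if env_info.get("time_of_day") == "night":
        return "night_driving"
    if route_info.get("total_km", 0) > 200:
        return "long_distance"
    if traffic_info.get("remaining_km") == 0:
        return "end_navigation"
    return "start_navigation"
```

In practice the rules would be evaluated continuously as the real-time road condition information updates during navigation.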
And S120, determining whether the user travel scene is a promotion scene of a brand party.
Specifically, an association relationship between a brand party and at least one candidate travel scene may be pre-established; accordingly, whether the user travel scene is a promotion scene of the brand party is determined based on this association relationship. The travel scene serves as a link between the user and the brand party: the brand party's promotion opportunity is determined by combining the user's needs with the brand party's usage scenes, so that the promotion information can meet the user's travel needs, thereby improving both the brand party's promotion conversion rate and the user's acceptance.
In an alternative embodiment, S120 includes: determining whether the user travel scene is a promotion scene of the brand party according to the association degree between the brand party's industry type and the user travel scene. Specifically, according to the brand party's industry type, the association degree between the brand party and each candidate travel scene can be determined, and the N candidate travel scenes with the highest association degree (N being a natural number) are selected as the brand party's promotion scenes. For example, a refueling scene or a long-distance driving scene may serve as the promotion scene for a certain oil brand, and a service area scene may serve as the promotion scene for a certain beverage brand. Determining whether the user travel scene is a promotion scene of the brand party according to this association degree improves the match between the brand promotion and the user's needs, and prevents brand promotion from harming the user experience.
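The top-N selection just described can be sketched as follows. The affinity table, industry names, and scene names are made-up assumptions for illustration:

```python
# Hypothetical association-degree table: industry type -> {scene: score}.
AFFINITY = {
    "fuel": {"refueling": 0.9, "long_distance": 0.7, "congestion": 0.2},
    "beverage": {"service_area": 0.8, "long_distance": 0.4},
}

def promotion_scenes(industry, n=2):
    """Return the N candidate travel scenes with the highest association degree."""
    scores = AFFINITY.get(industry, {})
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n]

def is_promotion_scene(industry, travel_scene, n=2):
    """Check whether the current user travel scene is a promotion scene."""
    return travel_scene in promotion_scenes(industry, n)
```

A real system would presumably learn or configure the association degrees per brand party rather than hard-code them.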
S130, if yes, broadcasting the navigation auxiliary voice for the user, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet.
The user voice packet refers to a voice packet customized for the user in advance from the user's recorded voice. The embodiment of the present application generates the navigation auxiliary voice based on a personalized user voice packet; compared with navigation auxiliary voice based on general voice packets such as those of celebrities and internet celebrities, this not only reduces the production cost of the voice packet but also improves its retention rate and usage frequency.
The navigation auxiliary voice is generated by combining the brand party's advertisement slogan with the user travel scene, so it can meet both the brand party's promotion needs and the user's travel needs. By strongly binding the brand party's advertisement to the user travel scene, compared with relatively fixed, hard ad insertion, the advertisement conversion rate can be improved without sacrificing user experience.
It should be noted that in the embodiment of the present application, brand promotion is performed only when the user travel scene is a promotion scene of a brand party; in that case, the navigation auxiliary voice generated from the brand party's advertisement slogan and the user travel scene is played. When the user travel scene is not a promotion scene of any brand party, no brand promotion is performed, and only navigation auxiliary information generated from the user travel scene is played, avoiding disturbing the user. In addition, the number of brand parties is not specifically limited: one user travel scene may be associated with one or more brand parties, and different user travel scenes may be associated with the same or different brand parties.
In an optional implementation, after broadcasting the navigation auxiliary voice for the user, the method further includes: adjusting the broadcast frequency of the navigation auxiliary voice according to the user's feedback on it. Specifically, the user's feedback behavior is monitored through means such as log analysis, and the broadcast frequency is controlled dynamically; for example, if the number of negative feedbacks from the user on a certain navigation auxiliary voice is detected to exceed a threshold, the broadcast frequency of that voice is reduced, thereby avoiding degrading the user experience.
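A minimal sketch of this feedback loop is given below. The threshold, decay factor, and parameter names are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: reduce a prompt's broadcast frequency once its
# negative-feedback count (from log analysis) exceeds a threshold.
def adjust_frequency(current_freq, negative_feedback_count,
                     threshold=3, decay=0.5, min_freq=0.0):
    """Return a possibly reduced broadcast frequency (broadcasts per trip)."""
    if negative_feedback_count > threshold:
        return max(min_freq, current_freq * decay)
    return current_freq
```

The same monitoring signal could drive the promotion-scene adjustment mentioned next.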
In an optional implementation, after broadcasting the navigation auxiliary voice for the user, the method further includes: adjusting the brand party's promotion scene according to the user's feedback on the navigation auxiliary voice. Specifically, for a given brand party, the user's feedback behavior on at least two candidate scenes is monitored, and the brand party's promotion scene is adjusted according to the feedback effect.
In addition, the navigation auxiliary text can be adjusted according to user feedback on different navigation auxiliary texts, so as to maximize the publicity effect while effectively reducing disturbance to the user.
According to the technical scheme of this embodiment, the user travel scene is combined with the brand party's promotion needs, so that the navigation auxiliary voice can meet both the brand party's promotion needs and the user's travel needs, improving the accuracy of the navigation auxiliary voice. Moreover, since the navigation auxiliary voice is generated based on the user's own voice packet, the usage frequency and retention rate of the voice packet, as well as the user's familiarity with and acceptance of the navigation auxiliary voice, can be improved.
Fig. 2 is a schematic flowchart of a navigation broadcast information processing method according to an embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above-described embodiments. Referring to fig. 2, the navigation broadcast information processing method provided in this embodiment includes:
S210, providing, through a voice recording interface, a sentence to be recorded that includes the brand party's advertisement slogan to the user.
In conjunction with fig. 3 and 4, a voice recording interface is presented to the user in response to the user initiating a participation operation on the brand party's promotion interface. A preset number (for example, 20) of sentences to be recorded are provided in the voice recording interface for the user to read aloud, where the sentences include at least one advertisement slogan of the brand party. By introducing the brand party's slogan into the voice recording process, on the one hand the user's familiarity with the brand party is improved; on the other hand, the audio of the slogan is obtained directly from the user's recording, and compared with voice obtained by synthesis, the accuracy of the slogan audio is higher and user acceptance is greater.
And S220, acquiring the user voice recorded by the user according to the sentence to be recorded.
Referring to fig. 4, during the reading process of the user, the user voice is collected.
And S230, generating the user voice packet according to the user voice.
Specifically, a speech synthesis model is trained on the timbre and articulation characteristics of the user's recorded audio to obtain the user's acoustic model, and a personalized user voice packet is generated based on that acoustic model. Because the recorded sentences include the brand's advertisement slogan, the subsequently generated navigation auxiliary voice matches the user better, and its synthesis quality can be ensured.
In an alternative embodiment, S230 is followed by: generating the head portrait of the user voice packet according to the brand party's identification information, which includes the brand party's name, icon, and the like. Using the brand party's identification information as the head portrait of the user voice packet further strengthens the user's familiarity with the brand party, thereby improving the user's acceptance of the navigation auxiliary voice generated from the brand party's advertisement slogan.
S240, in the user navigation process, determining a user travel scene.
And S250, determining whether the user travel scene is a promotion scene of a brand party.
S260, if yes, broadcasting the navigation auxiliary voice for the user, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet.
In an alternative embodiment, before S260 the method further includes: generating a navigation auxiliary text according to the brand party's advertisement slogan and the user travel scene; and generating the navigation auxiliary voice from the navigation auxiliary text based on the user voice packet.
Specifically, a navigation aid phrase is generated according to the user travel scene; it refers to information the user needs while driving and serves to improve travel safety and convenience. Different user travel scenes may yield different navigation aid phrases, for example "navigation started", "congestion ahead", or a camera prompt. A natural language processing technique is then used to combine the navigation aid phrase with the brand party's advertisement slogan to obtain the navigation auxiliary text. For example, if the navigation aid phrase is "navigation ended" and the brand party's slogan is "a certain beverage wishes you a good mood", the generated navigation auxiliary text may be "Safe arrival; a certain beverage wishes you a good mood". For another example, if the navigation aid phrase is "long-distance driving, remember to rest" and the slogan is "a certain oil accompanies you and guards your trip", the generated text may be "You have been driving for a long time; a certain oil reminds you to check the remaining fuel in time". Because the navigation auxiliary text contains both the navigation aid phrase that meets the user's travel needs and the advertisement slogan that meets the brand party's promotion needs, and the navigation auxiliary voice is generated from the personalized user voice packet, the navigation auxiliary voice can satisfy both the user and the brand party.
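The combination step above can be illustrated with a simple template join. The templates and scene names are assumptions; the patent's natural language processing step would be more sophisticated:

```python
# Hypothetical per-scene templates combining a navigation aid phrase
# with a brand party's advertisement slogan into one auxiliary text.
TEMPLATES = {
    "end_navigation": "{aid}. {slogan}.",
    "long_distance": "{aid}; {slogan}.",
}

def build_aux_text(scene, aid, slogan):
    """Fill the scene's template; fall back to a generic template."""
    template = TEMPLATES.get(scene, "{aid}. {slogan}.")
    return template.format(aid=aid, slogan=slogan)
```

For example, `build_aux_text("end_navigation", "Safe arrival", "A certain beverage wishes you a good mood")` yields the combined text in the style of the examples above.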
In an alternative embodiment, generating the navigation auxiliary voice from the navigation auxiliary text based on the user voice packet includes: if a first auxiliary text in the navigation auxiliary text successfully matches the pre-recorded user voice, generating a first auxiliary voice based on that recorded voice; if a second auxiliary text in the navigation auxiliary text fails to match the pre-recorded user voice, generating a second auxiliary voice based on the user voice packet; and generating the navigation auxiliary voice from the first auxiliary voice and the second auxiliary voice. By preferentially using the user's recorded voice and falling back to synthesized voice, the overall synthesis quality of the navigation auxiliary voice can be improved.
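This recorded-first strategy can be sketched as follows. The audio clips are stand-in strings and `synthesize` is a hypothetical placeholder for TTS with the personalized voice packet:

```python
def synthesize(text):
    # Placeholder for speech synthesis with the user voice packet.
    return f"<tts:{text}>"

def assemble_aux_speech(segments, recorded_audio):
    """recorded_audio maps text -> recorded clip; unmatched text is synthesized."""
    clips = []
    for text in segments:
        if text in recorded_audio:
            clips.append(recorded_audio[text])  # first auxiliary text: reuse recording
        else:
            clips.append(synthesize(text))      # second auxiliary text: synthesize
    return clips
```

Segments that were read aloud during voice packet recording (e.g. the slogan) reuse the original recording, which is why including the slogan in the recorded sentences helps synthesis quality.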
According to the technical scheme of this embodiment, the brand party's advertisement slogan is introduced during user voice packet recording, and during user navigation the user travel scenes matched to the brand party are used as its promotion scenes to generate the broadcast content. Advertisement delivery is thus strongly coupled with actual use, maximizing the advertising effect while effectively reducing disturbance to the user.
Fig. 5 is a schematic structural diagram of a navigation broadcast information processing device according to an embodiment of the present application. Referring to fig. 5, the embodiment of the present application discloses a navigation broadcast information processing apparatus 300, where the apparatus 300 includes:
a travel scene determining module 301, configured to determine a user travel scene in a user navigation process;
a promotion scene determining module 302, configured to determine whether the user travel scene is a promotion scene of a brand party;
the voice broadcasting module 303 is configured to broadcast the navigation auxiliary voice for the user if the user travel scene is a promotion scene of a brand party, where the navigation auxiliary voice is generated from the brand party's advertisement slogan and the user travel scene based on a user voice packet.
Optionally, the travel scene determining module 301 is specifically configured to:
determining a user trip scene according to at least one of the vehicle registration information, the navigation route information, the navigation environment information and the real-time road condition information of the user;
wherein the user travel scene comprises at least one of the following: the method comprises the following steps of starting a navigation scene, a long-distance driving scene, a congestion scene, a refueling scene, a service area scene, a night driving scene and an ending navigation scene.
Optionally, the apparatus further includes a voice packet generation module, where the voice packet generation module includes:
the sentence providing unit is used for providing, through a voice recording interface, a sentence to be recorded that includes the brand party's advertisement slogan to the user;
the voice recording unit is used for acquiring the user voice recorded by the user according to the sentence to be recorded;
and the voice packet generating unit is used for generating the user voice packet according to the user voice.
Optionally, the apparatus further includes an auxiliary speech generation module, where the auxiliary speech generation module includes:
the auxiliary text generation unit is used for generating a navigation auxiliary text according to the brand party's advertisement slogan and the user travel scene;
and the auxiliary voice generating unit is used for generating the navigation auxiliary voice according to the navigation auxiliary text based on the user voice packet.
Optionally, the auxiliary speech generating unit is specifically configured to:
if a first auxiliary text in the navigation auxiliary text successfully matches the pre-recorded user voice, generating a first auxiliary voice based on that recorded voice;
if a second auxiliary text in the navigation auxiliary text fails to match the pre-recorded user voice, generating a second auxiliary voice based on the user voice packet;
and generating the navigation auxiliary voice from the first auxiliary voice and the second auxiliary voice.
Optionally, the voice packet generating module further includes:
and the head portrait generating unit is used for generating the head portrait of the user voice packet according to the identification information of the brand party.
Optionally, the popularization scenario determining module 302 is specifically configured to:
and determining whether the user travel scene is a promotion scene of the brand party according to the association degree between the industry type of the brand party and the user travel scene.
Optionally, the apparatus further comprises:
and the adjusting module is used for adjusting the broadcast frequency of the navigation auxiliary voice and/or the brand party's promotion scene according to the user's feedback on the navigation auxiliary voice.
According to the technical scheme of this embodiment, the brand party's advertisement slogan is introduced during user voice packet recording, and during user navigation the user travel scenes matched to the brand party are used as its promotion scenes to generate the broadcast content. Advertisement delivery is thus strongly coupled with actual use, maximizing the advertising effect while effectively reducing disturbance to the user.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 is a block diagram of an electronic device for navigation broadcast information processing according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the present application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the components, including high-speed and low-speed interfaces. The components are interconnected using different buses and may be mounted on a common motherboard or in other ways as needed. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information for a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 401 is taken as an example.
The memory 402 is a non-transitory computer-readable storage medium as provided herein. The memory stores instructions executable by the at least one processor, so that the at least one processor executes the navigation broadcast information processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the navigation broadcast information processing method provided by the present application.
The memory 402, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for processing navigation broadcast information in the embodiment of the present application (for example, the travel scene determining module 301, the promotion scene determining module 302, and the voice broadcast module 303 shown in fig. 5). The processor 401 executes various functional applications of the server and data processing, that is, implements the method of processing the navigation broadcast information in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 402.
The memory 402 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device for navigation broadcast information processing, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memories remotely located from the processor 401, and these remote memories may be connected to the electronic device for navigation broadcast information processing through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the navigation broadcast information processing method may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or other means; fig. 6 illustrates connection by a bus as an example.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for navigation broadcast information processing; examples include a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, and joystick. The output device 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the brand party's advertisement words are introduced during the recording of the user voice packet, and during navigation the user travel scene matched with the brand party serves as the brand party's promotion scene for generating the play file. Advertisement delivery is thereby tightly coupled with the user's actual use, maximizing the advertising effect while effectively reducing disturbance to the user.
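The core decision in this scheme (broadcast the brand party's advertisement words only when the current travel scene is one of that brand's promotion scenes) can be sketched as follows. All scene names, guidance strings, and the `PROMOTION_SCENES` set are invented for illustration and do not come from the patent:

```python
# Hypothetical sketch of the broadcast decision; scene names and guidance
# text are illustrative, not taken from the patent.
PROMOTION_SCENES = {"refueling", "service_area"}  # scenes the brand party promotes in

def maybe_broadcast(scene, slogan):
    """Return the text of the play file, or None when the scene is not a
    promotion scene of the brand party."""
    if scene not in PROMOTION_SCENES:
        return None  # do not disturb the user outside promotion scenes
    guidance = {
        "refueling": "A gas station is 2 km ahead.",
        "service_area": "A service area is coming up.",
    }[scene]
    # The play file combines scene guidance with the advertisement words.
    return guidance + " " + slogan
```

For example, `maybe_broadcast("refueling", "Fuel up with BrandX!")` yields the combined guidance-plus-slogan text, while a non-promotion scene such as `"congestion"` yields `None`, which is how the scheme limits disturbance to the user.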
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A navigation broadcasting information processing method is characterized by comprising the following steps:
determining a user travel scene in the user navigation process;
determining whether the user travel scene is a promotion scene of a brand party;
if so, broadcasting a navigation auxiliary voice for the user; wherein the navigation auxiliary voice is generated according to the advertisement words of the brand party and the user travel scene based on a user voice packet.
2. The method of claim 1, wherein the determining a user travel scenario comprises:
determining the user travel scene according to at least one of vehicle registration information, navigation route information, navigation environment information, and real-time road condition information of the user;
wherein the user travel scene comprises at least one of the following: the method comprises the following steps of starting a navigation scene, a long-distance driving scene, a congestion scene, a refueling scene, a service area scene, a night driving scene and an ending navigation scene.
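A minimal rule-based sketch of the scene determination in claim 2 might look as follows; the input signals, thresholds, and priority order are invented for illustration and are not specified by the patent:

```python
def determine_travel_scene(fuel_level, speed_kmh, traffic_speed_kmh,
                           remaining_km, hour):
    """Classify the current moment into one of the travel scenes listed in
    claim 2. All thresholds below are illustrative assumptions."""
    if fuel_level < 0.15:                          # vehicle information
        return "refueling"
    if speed_kmh < 10 and traffic_speed_kmh < 10:  # real-time road conditions
        return "congestion"
    if remaining_km > 300:                         # navigation route information
        return "long_distance_driving"
    if hour >= 22 or hour < 5:                     # navigation environment information
        return "night_driving"
    return "default"
```

In practice the rules would be evaluated continuously along the route; this sketch only shows that each listed scene can be derived from a distinct signal source.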
3. The method of claim 1, wherein before broadcasting the navigation auxiliary voice for the user, the method further comprises:
providing a sentence to be recorded including the advertisement words of the brand party to the user through a voice recording interface;
acquiring the user voice recorded by the user according to the sentence to be recorded;
and generating the user voice packet according to the user voice.
4. The method of claim 1 or 3, wherein before broadcasting the navigation auxiliary voice for the user, the method further comprises:
generating a navigation auxiliary text according to the advertisement words of the brand party and the user travel scene;
and generating the navigation auxiliary voice according to the navigation auxiliary text based on the user voice packet.
5. The method of claim 4, wherein generating the navigation auxiliary voice according to the navigation auxiliary text based on the user voice packet comprises:
if a first auxiliary text in the navigation auxiliary text is successfully matched with the pre-recorded user voice, generating a first auxiliary voice based on the user voice;
if a second auxiliary text in the navigation auxiliary text fails to be matched with the pre-recorded user voice, generating a second auxiliary voice based on the user voice packet;
and generating the navigation auxiliary voice according to the first auxiliary voice and the second auxiliary voice.
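Claim 5 describes a hybrid scheme: auxiliary text segments that match a pre-recorded user utterance reuse the recording directly, while unmatched segments are synthesized from the user voice packet, and the results are concatenated. A sketch, with `synthesize` standing in for a real personalized-TTS call and all names invented for illustration:

```python
def generate_auxiliary_voice(segments, recorded_clips):
    """segments: list of auxiliary text segments; recorded_clips: mapping
    from text to a pre-recorded user clip. Unmatched segments fall back to
    synthesis from the user voice packet (stubbed out here)."""
    def synthesize(text):
        return "<tts:" + text + ">"  # stand-in for voice-packet-based TTS
    # Matched text reuses the recording (first auxiliary voice); the rest is
    # synthesized (second auxiliary voice); both are spliced in order.
    clips = [recorded_clips.get(seg) or synthesize(seg) for seg in segments]
    return "".join(clips)
```

The design rationale is that recorded clips keep the user's authentic voice for high-frequency phrases, while synthesis covers the long tail of scene-specific text without further recording effort.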
6. The method of claim 3, further comprising, after generating the user voice packet according to the user voice:
and generating the head portrait of the user voice packet according to the identification information of the brand party.
7. The method of claim 1, wherein determining whether the user travel scenario is a promotion scenario of a brand party comprises:
and determining whether the user travel scene is a promotion scene of the brand party according to the association degree between the industry type of the brand party and the user travel scene.
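One plausible reading of claim 7 is a scored lookup: each (industry type, travel scene) pair carries an association degree, and a scene counts as a promotion scene only when the degree clears a threshold. The scores and threshold below are invented for illustration:

```python
# Illustrative association degrees between industry types and travel scenes;
# none of these values come from the patent.
ASSOCIATION_DEGREE = {
    ("fuel_brand", "refueling"): 0.9,
    ("fuel_brand", "congestion"): 0.2,
    ("restaurant", "service_area"): 0.8,
}

def is_promotion_scene(industry_type, scene, threshold=0.5):
    """A scene is a promotion scene of the brand party when the association
    degree between its industry type and the scene clears the threshold."""
    return ASSOCIATION_DEGREE.get((industry_type, scene), 0.0) >= threshold
```

Under these assumed scores, a fuel brand would be promoted in a refueling scene but not in a congestion scene, matching the claim's intent of tying promotion to scene relevance.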
8. The method of claim 1, wherein after broadcasting the navigation auxiliary voice for the user, the method further comprises:
and adjusting the broadcasting frequency of the navigation auxiliary voice and/or adjusting the promotion scene of the brand party according to the feedback information of the user to the navigation auxiliary voice.
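The adjustment in claim 8 could be as simple as a bounded per-trip counter; the step sizes and bounds here are assumptions for illustration only:

```python
def adjust_broadcast_frequency(per_trip_limit, feedback):
    """Lower the per-trip broadcast limit on negative user feedback and
    raise it (capped) on positive feedback; step sizes and the cap of 5
    are illustrative assumptions, not values from the patent."""
    if feedback == "negative":
        return max(0, per_trip_limit - 1)
    if feedback == "positive":
        return min(5, per_trip_limit + 1)
    return per_trip_limit  # no feedback: leave the limit unchanged
```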
9. A navigation broadcast information processing apparatus, comprising:
the travel scene determining module is used for determining a user travel scene in the user navigation process;
the promotion scene determining module is used for determining whether the user travel scene is a promotion scene of a brand party;
the voice broadcasting module is used for broadcasting a navigation auxiliary voice for the user if the user travel scene is a promotion scene of a brand party; wherein the navigation auxiliary voice is generated according to the advertisement words of the brand party and the user travel scene based on a user voice packet.
10. The apparatus of claim 9, wherein the travel scenario determination module is specifically configured to:
determining the user travel scene according to at least one of vehicle registration information, navigation route information, navigation environment information, and real-time road condition information of the user;
wherein the user travel scene comprises at least one of the following: the method comprises the following steps of starting a navigation scene, a long-distance driving scene, a congestion scene, a refueling scene, a service area scene, a night driving scene and an ending navigation scene.
11. The apparatus of claim 9, further comprising a voice packet generation module, the voice packet generation module comprising:
the sentence providing unit is used for providing a sentence to be recorded including the advertisement words of the brand party to the user through a voice recording interface;
the voice recording unit is used for acquiring the user voice recorded by the user according to the sentence to be recorded;
and the voice packet generating unit is used for generating the user voice packet according to the user voice.
12. The apparatus of claim 9 or 11, further comprising an auxiliary speech generation module, the auxiliary speech generation module comprising:
the auxiliary text generation unit is used for generating a navigation auxiliary text according to the advertisement words of the brand parties and the user travel scene;
and the auxiliary voice generating unit is used for generating the navigation auxiliary voice according to the navigation auxiliary text based on the user voice packet.
13. The apparatus according to claim 12, wherein the auxiliary speech generating unit is specifically configured to:
if a first auxiliary text in the navigation auxiliary text is successfully matched with the pre-recorded user voice, generating a first auxiliary voice based on the user voice;
if a second auxiliary text in the navigation auxiliary text fails to be matched with the pre-recorded user voice, generating a second auxiliary voice based on the user voice packet;
and generating the navigation auxiliary voice according to the first auxiliary voice and the second auxiliary voice.
14. The apparatus of claim 11, wherein the voice packet generating module further comprises:
and the head portrait generating unit is used for generating the head portrait of the user voice packet according to the identification information of the brand party.
15. The apparatus of claim 9, wherein the promotion scenario determination module is specifically configured to:
and determining whether the user travel scene is a promotion scene of the brand party according to the association degree between the industry type of the brand party and the user travel scene.
16. The apparatus of claim 9, further comprising:
and the adjusting module is used for adjusting the broadcasting frequency of the navigation auxiliary voice and/or adjusting the promotion scene of the brand party according to the feedback information of the user to the navigation auxiliary voice.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202010366926.8A 2020-04-30 2020-04-30 Navigation broadcast information processing method and device, electronic equipment and storage medium Active CN111578965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010366926.8A CN111578965B (en) 2020-04-30 2020-04-30 Navigation broadcast information processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010366926.8A CN111578965B (en) 2020-04-30 2020-04-30 Navigation broadcast information processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111578965A true CN111578965A (en) 2020-08-25
CN111578965B CN111578965B (en) 2022-07-08

Family

ID=72122803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010366926.8A Active CN111578965B (en) 2020-04-30 2020-04-30 Navigation broadcast information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111578965B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112133281A (en) * 2020-09-15 2020-12-25 北京百度网讯科技有限公司 Voice broadcasting method and device, electronic equipment and storage medium
CN112269864A (en) * 2020-10-15 2021-01-26 北京百度网讯科技有限公司 Method, device and equipment for generating broadcast voice and computer storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534315A (en) * 2008-03-14 2009-09-16 深圳市莱科电子技术有限公司 Advertising information issuing system combined with positioning navigation
CN104296771A (en) * 2014-10-14 2015-01-21 广东翼卡车联网服务有限公司 Method and system for announcing prompt information at vehicle-mounted navigation client
US20160069706A1 (en) * 2012-08-24 2016-03-10 Google Inc. Method for providing directions in a mapping application utilizing landmarks associated with brand advertising
CN105606117A (en) * 2014-11-18 2016-05-25 深圳市腾讯计算机系统有限公司 Navigation prompting method and navigation prompting apparatus
CN105928536A (en) * 2016-06-15 2016-09-07 苏州清研捷运信息科技有限公司 Method for embedding position-based video advertisements into vehicle navigation
CN107289964A (en) * 2016-03-31 2017-10-24 高德信息技术有限公司 One kind navigation voice broadcast method and device
JP2018028533A (en) * 2017-07-05 2018-02-22 ヤフー株式会社 Navigation program, advertisement management server, and advertisement management method
CN110017847A (en) * 2019-03-21 2019-07-16 腾讯大地通途(北京)科技有限公司 A kind of adaptive navigation voice broadcast method, apparatus and system
CN110580631A (en) * 2018-06-07 2019-12-17 北京奇虎科技有限公司 advertisement putting method and device
CN110751940A (en) * 2019-09-16 2020-02-04 百度在线网络技术(北京)有限公司 Method, device, equipment and computer storage medium for generating voice packet
CN110781657A (en) * 2019-10-14 2020-02-11 百度在线网络技术(北京)有限公司 Management method, device and equipment for navigation broadcasting

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534315A (en) * 2008-03-14 2009-09-16 深圳市莱科电子技术有限公司 Advertising information issuing system combined with positioning navigation
US20160069706A1 (en) * 2012-08-24 2016-03-10 Google Inc. Method for providing directions in a mapping application utilizing landmarks associated with brand advertising
CN104296771A (en) * 2014-10-14 2015-01-21 广东翼卡车联网服务有限公司 Method and system for announcing prompt information at vehicle-mounted navigation client
CN105606117A (en) * 2014-11-18 2016-05-25 深圳市腾讯计算机系统有限公司 Navigation prompting method and navigation prompting apparatus
CN107289964A (en) * 2016-03-31 2017-10-24 高德信息技术有限公司 One kind navigation voice broadcast method and device
CN105928536A (en) * 2016-06-15 2016-09-07 苏州清研捷运信息科技有限公司 Method for embedding position-based video advertisements into vehicle navigation
JP2018028533A (en) * 2017-07-05 2018-02-22 ヤフー株式会社 Navigation program, advertisement management server, and advertisement management method
CN110580631A (en) * 2018-06-07 2019-12-17 北京奇虎科技有限公司 advertisement putting method and device
CN110017847A (en) * 2019-03-21 2019-07-16 腾讯大地通途(北京)科技有限公司 A kind of adaptive navigation voice broadcast method, apparatus and system
CN110751940A (en) * 2019-09-16 2020-02-04 百度在线网络技术(北京)有限公司 Method, device, equipment and computer storage medium for generating voice packet
CN110781657A (en) * 2019-10-14 2020-02-11 百度在线网络技术(北京)有限公司 Management method, device and equipment for navigation broadcasting

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUN-HUNG YANG; SHEUE-LING HWANG; JAN-LI WANG: "The design and evaluation of an auditory navigation system for blind and visually impaired", Proceedings of the 2014 IEEE 18th International Conference on Computer Supported Cooperative Work in Design *
ZHU MINHUI: "Voice technology accelerates personalization of connected vehicles" (title translated from Chinese), Automobile &amp; Parts *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112133281A (en) * 2020-09-15 2020-12-25 北京百度网讯科技有限公司 Voice broadcasting method and device, electronic equipment and storage medium
CN112269864A (en) * 2020-10-15 2021-01-26 北京百度网讯科技有限公司 Method, device and equipment for generating broadcast voice and computer storage medium
CN112269864B (en) * 2020-10-15 2023-06-23 北京百度网讯科技有限公司 Method, device, equipment and computer storage medium for generating broadcast voice

Also Published As

Publication number Publication date
CN111578965B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
US11348571B2 (en) Methods, computing devices, and storage media for generating training corpus
CN108959257B (en) Natural language parsing method, device, server and storage medium
EP3591338A2 (en) Navigation method, navigation device, device and medium
CN111694973A (en) Model training method and device for automatic driving scene and electronic equipment
CN111578965B (en) Navigation broadcast information processing method and device, electronic equipment and storage medium
CN110674241B (en) Map broadcasting management method and device, electronic equipment and storage medium
JP7483781B2 (en) Method, device, electronic device, computer-readable storage medium and computer program for pushing information - Patents.com
CN111324727A (en) User intention recognition method, device, equipment and readable storage medium
US20220199094A1 (en) Joint automatic speech recognition and speaker diarization
CN110097121B (en) Method and device for classifying driving tracks, electronic equipment and storage medium
CN107943834B (en) Method, device, equipment and storage medium for implementing man-machine conversation
EP3596615A1 (en) Adaptive interface in a voice-activated network
CN110472095A (en) Voice guide method, apparatus, equipment and medium
CN111951782A (en) Voice question and answer method and device, computer readable storage medium and electronic equipment
US20160224316A1 (en) Vehicle interface ststem
CN111770375A (en) Video processing method and device, electronic equipment and storage medium
CN111859181A (en) Cross-region travel recommendation method and device, electronic equipment and storage medium
CN111177462A (en) Method and device for determining video distribution timeliness
CN112269864A (en) Method, device and equipment for generating broadcast voice and computer storage medium
CN111982144A (en) Navigation method, navigation device, electronic equipment and computer readable medium
CN110781657A (en) Management method, device and equipment for navigation broadcasting
US10650803B2 (en) Mapping between speech signal and transcript
CN113658586A (en) Training method of voice recognition model, voice interaction method and device
CN111354334B (en) Voice output method, device, equipment and medium
CN112527235A (en) Voice playing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant