CN110726414B - Method and apparatus for outputting information - Google Patents

Method and apparatus for outputting information

Info

Publication number
CN110726414B
Authority
CN
China
Prior art keywords
road section
information
user
road
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911024498.4A
Other languages
Chinese (zh)
Other versions
CN110726414A
Inventor
刘子昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911024498.4A
Publication of CN110726414A
Application granted
Publication of CN110726414B

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 - Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents

Abstract

The embodiment of the application discloses a method and a device for outputting information. One embodiment of the method comprises: in response to detecting that the user has yawed and that the area where the user is located includes a viaduct, determining the road sections included in the area where the user is located to obtain a road section set; acquiring characteristic information of each road section in the road section set; inputting the characteristic information of every two road sections in the road section set into a pre-trained road section determination model, and counting the marking results of the road section determination model for each road section, wherein the road section determination model is used for marking each road section according to the input characteristic information of the two road sections; determining the information of the road section where the user is located according to the statistical result; and outputting the information of the road section where the user is located. According to this embodiment, the road section where the user is located in the elevated area can be determined from the characteristic information of each road section in the elevated area and the road section determination model, which improves the accuracy of positioning after yawing in an elevated area.

Description

Method and apparatus for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
An indispensable part of driving navigation is the route recalculation function after yawing. The road network in an elevated area is complex: the headings and horizontal positions of the upper and lower roads of a viaduct almost completely overlap, and positioning errors after yawing are frequent, so the user receives seriously wrong voice broadcasts, guidance and routes. When the error cannot be corrected for a long time during navigation, the user may be forced to detour, may complain, and may even commit traffic violations or be involved in traffic accidents.
Since the upper and lower roads in an elevated area are parallel and their horizontal positions almost completely overlap, the GPS system often cannot distinguish the upper road from the lower road. In addition, GPS readings are prone to drift in elevated areas, so existing post-yaw positioning algorithms perform poorly in such areas.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, including: in response to the fact that the user drifts and the area where the user is located comprises the viaduct, determining road sections included in the area where the user is located to obtain a road section set; acquiring characteristic information of each road section in the road section set; inputting the feature information of every two road segments in the road segment set into a pre-trained road segment determination model, and counting the marking result of the road segment determination model for each road segment, wherein the road segment determination model is used for marking each road segment according to the input feature information of the two road segments; determining the information of the road section where the user is located according to the statistical result; and outputting the information of the road section where the user is located.
In some embodiments, the above method further comprises: and replanning navigation information for the user according to the information of the road section where the user is located.
In some embodiments, the characteristic information of the road segment includes at least one of: attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
In some embodiments, the marking result of the road segment includes 0 or 1; and the counting of the marking result of the road section determination model for each road section comprises the following steps: for each road segment, the number of times the road segment is marked as 1 is counted.
In some embodiments, the determining information of the road segment where the user is located according to the statistical result includes: taking the road segment marked as 1 with the most times as the road segment where the user is located; and taking the characteristic information of the road section as the information of the road section where the user is located.
In some embodiments, the road segment determination model is obtained by training the following steps: acquiring a training sample set, wherein the training sample comprises the characteristic information of a road section marked as 1 and the characteristic information of a road section marked as 0; and taking the feature information of the two road sections as input, taking the marks corresponding to the two road sections as expected output, and training to obtain the road section determination model.
In some embodiments, the training sample is obtained by: in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area; in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0; and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: the road section set determining unit is configured to determine road sections included in the area where the user is located to obtain a road section set in response to the fact that the user is detected to yaw and the area where the user is located includes the viaduct; a feature information acquisition unit configured to acquire feature information of each link in the link set; a marking result counting unit configured to input feature information of every two road segments in the road segment set into a pre-trained road segment determination model, and count a marking result of the road segment determination model for each road segment, the road segment determination model being used for marking each road segment according to the input feature information of the two road segments; a link information determination unit configured to determine information of a link where the user is located according to the statistical result; and a link information output unit configured to output information of a link where the user is located.
In some embodiments, the above apparatus further comprises: and the path planning unit is configured to re-plan navigation information for the user according to the information of the road section where the user is located.
In some embodiments, the characteristic information of the road segment includes at least one of: attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
In some embodiments, the marking result of the road segment includes 0 or 1; and the above-mentioned marking result statistical unit is further configured to: for each road segment, the number of times the road segment is marked as 1 is counted.
In some embodiments, the road section information determining unit is further configured to: taking the road segment marked as 1 with the most times as the road segment where the user is located; and taking the characteristic information of the road section as the information of the road section where the user is located.
In some embodiments, the above apparatus further comprises: a training sample acquisition unit configured to acquire a set of training samples including feature information of a link marked as 1 and feature information of a link marked as 0; and the model training unit is configured to train the road section determination model by taking the characteristic information of the two road sections as input and taking the marks corresponding to the two road sections as expected output.
In some embodiments, the training sample acquisition unit is further configured to: in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area; in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0; and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as described in any one of the embodiments of the first aspect.
According to the method and the device for outputting the information, provided by the above embodiments of the application, after the yaw of the user is detected and the area where the user is located includes the viaduct, the road segments included in the area where the user is located are determined, and the road segment set is obtained. Then, the feature information of each road section in the road section set is obtained. And inputting the characteristic information of every two road sections in the road section set into a pre-trained road section determination model, and then counting the output result of the road section determination model aiming at each road section. The road section determination model is used for determining the identification corresponding to each road section according to the input characteristic information of the two road sections. And determining the information of the road section where the user is located according to the output result. And finally, outputting the information of the road section where the user is located. According to the method, the road section where the user is located in the elevated area can be determined according to the characteristic information of each road section in the elevated area and the road section determination model, and the accuracy of positioning after yawing in the elevated area is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, in accordance with the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for outputting information according to the present application;
FIG. 4 is a flow diagram of another embodiment of a method for outputting information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, system architecture 100 may include a vehicle 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between vehicle 101 and server 103. Network 102 may include various wireless communication links.
A user may interact with the server 103 via the network 102 using an electronic device (e.g., a vehicle-mounted terminal) mounted on the vehicle 101 to receive or transmit messages or the like. Various electronic devices may be included on vehicle 101, such as an on-board terminal, microphone, speaker, GPS location device, and the like. Various communication client applications such as map navigation applications and voice broadcast applications can be installed on the vehicle-mounted terminal.
The in-vehicle terminal herein may be hardware or software. When the vehicle-mounted terminal is hardware, it may be various electronic devices having a display screen and supporting map navigation, including but not limited to a smart phone, a tablet computer, a laptop computer, and the like. When the in-vehicle terminal is software, the software can be installed in the electronic device listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 103 may be a server that provides various services, such as a background map server that provides support for the travel path of the vehicle 101. The background map server may analyze and otherwise process data such as the received yaw request, and feed back a processing result (e.g., post-yaw positioning information) to the vehicle 101.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for outputting information provided in the embodiment of the present application may be executed by an in-vehicle terminal on the vehicle 101, or may be executed by the server 103. Accordingly, the apparatus for outputting information may be provided in the in-vehicle terminal of the vehicle 101, or may be provided in the server 103.
It should be understood that the number of vehicles, networks, and servers in FIG. 1 is merely illustrative. There may be any number of vehicles, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present application is shown. The method for outputting information of the embodiment comprises the following steps:
step 201, in response to detecting that the user is off course and the area where the user is located includes the viaduct, determining the road segments included in the area where the user is located, and obtaining a road segment set.
In the present embodiment, the execution subject of the method for outputting information (e.g., the in-vehicle terminal on the vehicle 101 or the server 103 shown in fig. 1) may detect whether the user is yawing in various ways. For example, when the execution subject is an in-vehicle terminal on a vehicle, it may determine whether the user is off course by detecting whether the user's current position deviates from the planned navigation route. When the execution subject is a server, it may determine whether the user is currently yawing by detecting whether a yaw request from the user has been received.
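As one concrete (and purely hypothetical) way to implement the terminal-side check described above, the following Python sketch flags a yaw when the current GPS fix lies farther than a threshold from the planned route; the 30-meter threshold, the vertex-based distance, and the equirectangular approximation are all assumptions for illustration, not part of the patent.

```python
import math

# Hypothetical client-side yaw check: the user is treated as off course when the
# current fix is farther than `threshold_m` from every vertex of the planned route.
# (A production check would measure distance to the route's segments, not vertices.)
def _approx_distance_m(p, q):
    """Rough equirectangular distance in meters between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000.0


def is_yawing(current_fix, planned_route, threshold_m=30.0):
    """current_fix: (lat, lon); planned_route: list of (lat, lon) route vertices."""
    return all(_approx_distance_m(current_fix, v) > threshold_m for v in planned_route)
```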
After determining that the user is off course, the execution subject may determine whether the area in which the user is located includes a viaduct. Here, the area where the user is located may refer to a rectangular or circular area centered on the user whose side length or radius is a preset value (e.g., 100 meters or 200 meters). After the execution subject determines that a viaduct is included in the area, it may determine the road sections included in the area to obtain a road section set. Here, a road section may refer to a path that the user could travel within the area where the user is located.
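A minimal sketch of building the road section set, assuming the map exposes links with a representative point and an elevated flag; the 200-meter radius, the attribute names and the injected distance function are illustrative assumptions, not taken from the patent.

```python
# Hypothetical construction of the road section set: keep every link whose
# reference point lies within a preset radius of the user, provided that the
# area actually contains a viaduct (elevated) link.
def build_link_set(user_fix, map_links, distance_m, radius_m=200.0):
    """
    user_fix   : (lat, lon) of the user
    map_links  : iterable of objects with .ref_point (lat, lon) and .is_elevated
    distance_m : callable((lat, lon), (lat, lon)) -> meters, e.g. the
                 _approx_distance_m helper sketched above
    """
    nearby = [link for link in map_links
              if distance_m(user_fix, link.ref_point) <= radius_m]
    if not any(link.is_elevated for link in nearby):
        return []   # no viaduct in the area: this method does not apply
    return nearby
```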
Step 202, acquiring characteristic information of each road section in the road section set.
After determining the road segment set in the area where the user is located, the execution subject may obtain feature information of each road segment in the road segment set. The characteristic information may be used to indicate specific characteristics of the link, and may include, for example, a link identifier, a road attribute, and the like. In this embodiment, the execution subject may obtain the feature information of each road segment from a preset database, or may obtain the feature information from the corresponding electronic device according to the feature information that needs to be obtained.
In some optional implementations of this embodiment, the characteristic information may include at least one of: attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
In this implementation, the attribute information may include speed limit information of the road, whether the road is closed, and the like. The speed information may include the speed and acceleration of the user's travel. The angle information may include GPS ephemeris data (e.g., angles, number of satellites, etc.). The yaw point information of the user on the road section may include the position of the yaw point, the road condition information of the area where the yaw point is located, and the like. The forward traveling information may include the traveling direction and traveling speed of the user, and the like. The information output by the hidden Markov model may include an emission probability, a transition probability, a Viterbi probability, and the like. Here, the input to the hidden Markov model is the N GPS coordinates of the user immediately before the yaw. Hidden Markov models are well established and will not be described in detail here.
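To make the listed feature classes concrete, here is a sketch of assembling a per-link feature vector; every field name and encoding below is an assumption made for illustration, and the HMM-derived probabilities are taken as precomputed inputs rather than recomputed here.

```python
# Hypothetical per-link feature vector covering the feature classes listed above.
def build_feature_vector(link, gps_track, hmm_scores):
    """
    link       : object with .speed_limit, .is_closed, .is_elevated, .heading_deg
    gps_track  : list of dicts with 'speed', 'accel', 'heading', 'lat', 'lon'
                 (the N fixes recorded before the yaw)
    hmm_scores : dict with precomputed 'emission', 'transition' and 'viterbi'
                 probabilities of this link for those N fixes
    """
    latest = gps_track[-1]
    mean_speed = sum(p["speed"] for p in gps_track) / len(gps_track)
    return [
        float(link.speed_limit),                 # attribute information
        float(link.is_closed),
        float(link.is_elevated),
        latest["speed"],                         # speed information
        latest["accel"],
        latest["heading"] - link.heading_deg,    # angle information
        latest["lat"],                           # yaw point information
        latest["lon"],
        mean_speed,                              # forward driving information
        hmm_scores["emission"],                  # hidden Markov model outputs
        hmm_scores["transition"],
        hmm_scores["viterbi"],
    ]
```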
Step 203, inputting the feature information of every two road segments in the road segment set into a pre-trained road segment determination model, and counting the marking result of the road segment determination model for each road segment.
After the execution subject obtains the feature information of each road section, it may combine the road sections in the road section set two by two to obtain every pairwise combination of road sections, and then input the feature information of the two road sections in each combination into the pre-trained road section determination model. In this embodiment, the road section determination model may determine the mark corresponding to each road section according to the input feature information of the two road sections. The road section determination model may be a machine learning model, for example a binary classification model. The binary classification model may mark the two road sections respectively to obtain a marking result corresponding to each road section. The mark may be 0 or 1: a 0 indicates that the user has a low probability of being on that road section after yawing, and a 1 indicates that the user has a high probability of being on that road section after yawing.
After the road section determination model has determined the marks of the two input road sections, the execution subject may count the number of times each road section is marked as 1.
In some optional implementations of the present embodiment, the marking result of the road segment includes 0 or 1. The step 203 may specifically include: for each road segment, the number of times the road segment is marked as 1 is counted.
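A minimal sketch of this marking-and-counting step, assuming the trained model is a binary classifier over the concatenated features of two road sections that marks the first section with its prediction and the second with the complement; both the concatenation and the symmetric marking are illustrative assumptions. The selection of the most frequently marked road section, described in the following steps, is included as pick_link.

```python
from collections import Counter
from itertools import combinations

# Hypothetical marking and counting: the pair model sees the concatenated
# features of two road sections and predicts 1 if the first one is the more
# likely location of the user; the second one receives the complementary mark.
def count_marks(feature_by_link, pair_classifier):
    """
    feature_by_link : dict {link_id: feature vector (list of floats)}
    pair_classifier : callable(list_of_samples) -> list of 0/1 predictions,
                      e.g. the predict() method of a trained binary classifier
    """
    times_marked_one = Counter({link: 0 for link in feature_by_link})
    for a, b in combinations(feature_by_link, 2):          # every two road sections
        pred = int(pair_classifier([feature_by_link[a] + feature_by_link[b]])[0])
        times_marked_one[a] += pred                         # mark of the first section
        times_marked_one[b] += 1 - pred                     # complementary mark
    return times_marked_one


def pick_link(times_marked_one):
    """The road section marked 1 the most times is taken as the user's location."""
    return max(times_marked_one, key=times_marked_one.get)
```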
And step 204, determining the information of the road section where the user is located according to the statistical result.
The execution subject may determine information of the road segment where the user is located according to the number of times each road segment is marked as 1 or 0. For example, the executing agent may determine the road segment marked 1 the most times as the road segment where the user is located. The information of the link may be feature information of the link.
In some optional implementations of this embodiment, the execution subject may take the road section marked as 1 the most times as the road section where the user is located, and take the feature information of that road section as the information of the road section where the user is located.
And step 205, outputting information of the road section where the user is located.
After determining information of the road segment where the user is located, the executing entity may output the information for further processing.
In some optional implementations of this embodiment, the method may further include the following steps not shown in fig. 2: and replanning navigation information for the user according to the information of the road section where the user is located.
After determining the information of the road section where the user is located, the execution subject may re-plan the navigation information for the user. Specifically, the execution subject may take the destination chosen before the yaw as the destination of the re-planned path and take the road section where the user is located as the starting road section, so as to obtain the re-planned navigation information.
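A small sketch of this optional re-planning step, assuming a generic route planner is available as a callable; plan_route and the dictionary fields are placeholders assumed for illustration, not an API defined by the patent.

```python
# Hypothetical re-planning: keep the pre-yaw destination and start the new
# route from the road section the user was determined to be on.
def replan_navigation(link_info, original_destination, plan_route):
    """
    link_info            : dict describing the located road section,
                           e.g. {"link_id": ..., "name": ...}
    original_destination : (lat, lon) destination chosen before the yaw
    plan_route           : callable(start_link_id, destination) -> navigation info
    """
    return plan_route(link_info["link_id"], original_destination)
```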
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of fig. 3, after detecting that the user is yawing, the vehicle-mounted terminal of the vehicle may detect whether the area where the user is located includes the viaduct. And if yes, executing the processing of the steps 201-205 to obtain the road section information of the user, and further replanning the navigation information for the user.
According to the method for outputting the information provided by the above embodiment of the application, after the user yaw is detected and the area where the user is located comprises the viaduct, the road segments included in the area where the user is located are determined, and the road segment set is obtained. Then, the feature information of each road section in the road section set is obtained. And inputting the characteristic information of every two road sections in the road section set into a pre-trained road section determination model, and then counting the output result of the road section determination model aiming at each road section. The road section determination model is used for determining the identification corresponding to each road section according to the input characteristic information of the two road sections. And determining the information of the road section where the user is located according to the output result. And finally, outputting the information of the road section where the user is located. According to the method, the road section where the user is located in the elevated area can be determined according to the characteristic information of each road section in the elevated area and the road section determination model, and the accuracy of positioning after yawing in the elevated area is improved.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for outputting information in accordance with the present application is shown. As shown in fig. 4, the method for outputting information of the present embodiment may include the following steps:
step 401, a training sample set is obtained.
In this embodiment, the execution subject may first train the road section determination model using a training sample set. Each training sample may include the feature information of a road section marked as 1 and the feature information of a road section marked as 0.
In some optional implementations of this embodiment, the execution subject may obtain the training sample by the following steps not shown in fig. 4: in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area; in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0; and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
In this implementation, the execution subject may receive a yaw request sent by a user. Upon receiving the yaw request, it may determine whether the current position of the user is located in an elevated area. If the current position of the user is located in the elevated area, the road section to which the user belongs is determined according to the offline map. Specifically, the execution subject may continue to acquire the position information of the user after receiving the yaw request and match the position information collected over a period of time against the offline map, thereby determining the road section to which the user belongs. The execution subject may then mark the road section to which the user belongs as 1 and the other road sections in the elevated area as 0. Finally, the feature information of the road section marked as 1 is combined with the feature information of each road section marked as 0 to obtain the training samples.
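A sketch of how such pairwise training samples could be assembled once the offline map has identified the true road section for one yaw event; the list-based layout and the symmetric (reversed) pairing are assumptions made for illustration.

```python
# Hypothetical pairwise sample construction for a single yaw event: the
# offline-map-matched road section is marked 1, every other section 0, and the
# positive section is paired with each negative one (in both orders).
def make_pairwise_samples(true_link_features, other_link_features):
    """
    true_link_features  : feature vector of the road section the user was on
    other_link_features : feature vectors of the other road sections in the area
    Returns (X, y): concatenated pair features and the mark of the first section.
    """
    X, y = [], []
    for neg in other_link_features:
        X.append(true_link_features + neg)   # (section marked 1, section marked 0)
        y.append(1)
        X.append(neg + true_link_features)   # reversed order so the model stays symmetric
        y.append(0)
    return X, y
```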
And step 402, taking the feature information of the two road sections as input, taking the marks corresponding to the two road sections as expected output, and training to obtain a road section determination model.
The execution subject may first establish an initial road section determination model. After obtaining the training sample set, it may take some or all of the training samples as inputs of the initial road section determination model in turn, take the marks corresponding to the two road sections in each sample as the expected output, and train the initial road section determination model to obtain the road section determination model.
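Under the assumption that the road section determination model is an ordinary binary classifier trained on the pairwise samples sketched above, training could look like the following; the choice of scikit-learn's GradientBoostingClassifier is illustrative only and is not prescribed by the patent.

```python
# Illustrative training of the pairwise road section determination model;
# any binary classifier could stand in for GradientBoostingClassifier here.
from sklearn.ensemble import GradientBoostingClassifier


def train_pair_model(X, y):
    """X: concatenated feature pairs; y: mark (0 or 1) of the first road section."""
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model

# Usage sketch: marks for a new pair of road sections a and b.
#   pred = model.predict([feat_a + feat_b])[0]   # mark of a; b is marked 1 - pred
```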
In step 403, in response to detecting that the user is off course and the area where the user is located includes the viaduct, determining the road segments included in the area where the user is located, and obtaining a road segment set.
Step 404, acquiring feature information of each road section in the road section set.
Step 405, inputting the feature information of every two road segments in the road segment set into a pre-trained road segment determination model, and counting the marking results of the road segment determination model for each road segment.
And 406, determining the information of the road section where the user is located according to the statistical result.
Step 407, outputting information of the road section where the user is located.
The principle of steps 403 to 407 is similar to that of steps 201 to 205, and is not described herein again.
The method for outputting information provided by the above embodiment of the present application may determine, by combining with offline map data, a road segment to which a user belongs when the user is off course, and use the road segment as a true value of a training sample, so as to train and obtain a road segment determination model.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: a link set determination unit 501, a feature information acquisition unit 502, a marking result statistics unit 503, a link information determination unit 504, and a link information output unit 505.
The link set determining unit 501 is configured to determine the links included in the area where the user is located to obtain the link set in response to detecting that the user is yawing and the area where the user is located includes the overpass.
A feature information obtaining unit 502 configured to obtain feature information of each link in the link set.
A marking result counting unit 503 configured to input feature information of every two road segments in the road segment set into a road segment determination model trained in advance, and count the marking result of the road segment determination model for each road segment. The road section determination model is used for marking each road section according to the input characteristic information of the two road sections.
A link information determination unit 504 configured to determine information of a link where the user is located according to the statistical result.
A link information output unit 505 configured to output information of a link where a user is located.
In some optional implementations of this embodiment, the apparatus 500 may further include a path planning unit, not shown in fig. 5, configured to re-plan the navigation information for the user according to the information of the road segment where the user is located.
In some optional implementations of this embodiment, the characteristic information of the road segment includes at least one of: attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
In some optional implementations of this embodiment, the marking result of the road segment includes 0 or 1; and the marking result statistics unit 503 may be further configured to: for each road segment, the number of times the road segment is marked as 1 is counted.
In some optional implementations of the present embodiment, the road segment information determining unit 504 may be further configured to: taking the road segment marked as 1 with the most times as the road segment where the user is located; and taking the characteristic information of the road section as the information of the road section where the user is located.
In some optional implementations of this embodiment, the apparatus 500 may further include a training sample obtaining unit and a model training unit, which are not shown in fig. 5.
A training sample acquisition unit configured to acquire a set of training samples. The training sample includes feature information for the road segment labeled 1 and feature information for the road segment labeled 0.
And the model training unit is configured to take the characteristic information of the two road sections as input, take the marks corresponding to the two road sections as expected output, and train to obtain the road section determination model.
In some optional implementations of this embodiment, the training sample obtaining unit may be further configured to: in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area; in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0; and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
It should be understood that units 501 to 505, which are described in the apparatus 500 for outputting information, correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for outputting information are equally applicable to the apparatus 500 and the units included therein and will not be described again here.
Referring now to FIG. 6, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to the fact that the user drifts and the area where the user is located comprises the viaduct, determining road sections included in the area where the user is located to obtain a road section set; acquiring characteristic information of each road section in a road section set; inputting the characteristic information of every two road sections in the road section set into a pre-trained road section determination model, counting the marking result of the road section determination model aiming at each road section, wherein the road section determination model is used for marking each road section according to the input characteristic information of the two road sections; determining the information of the road section where the user is located according to the statistical result; and outputting the information of the road section where the user is located.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a link set determination unit, a feature information acquisition unit, a marking result statistic unit, a link information determination unit, and a link information output unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the link information output unit may also be described as a "unit that outputs information of a link where the user is located".
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept defined above, for example technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (16)

1. A method for outputting information, comprising:
in response to the fact that the user drifts and the area where the user is located comprises the viaduct, determining road sections included in the area where the user is located to obtain a road section set;
acquiring characteristic information of each road section in the road section set;
inputting the characteristic information of every two road sections in the road section set into a pre-trained road section determination model, and counting the marking result of the road section determination model aiming at each road section to obtain a counting result, wherein the road section determination model is used for marking each road section according to the input characteristic information of the two road sections, and the marking result of each road section represents the probability that a user is located at the road section after yawing;
determining the information of the road section where the user is located according to the statistical result;
and outputting the information of the road section where the user is located.
2. The method of claim 1, wherein the method further comprises:
and replanning navigation information for the user according to the information of the road section where the user is located.
3. The method of claim 1, wherein the characteristic information of the road segment comprises at least one of:
attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
4. The method of claim 1, wherein the marking result of the road segment comprises 0 or 1; and
the step of counting the marking results of the road section determination model for each road section comprises the following steps:
for each road segment, the number of times the road segment is marked as 1 is counted.
5. The method of claim 4, wherein the determining information of the road segment where the user is located according to the statistical result comprises:
taking the road segment marked as 1 with the most times as the road segment where the user is located;
and taking the characteristic information of the road section as the information of the road section where the user is located.
6. The method of claim 1, wherein the segment determination model is trained by:
acquiring a training sample set, wherein the training sample comprises the characteristic information of a road section marked as 1 and the characteristic information of a road section marked as 0;
and taking the characteristic information of the two road sections as input, taking the marks corresponding to the two road sections as expected output, and training to obtain the road section determination model.
7. The method of claim 6, wherein the training samples are obtained by:
in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area;
in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0;
and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
8. An apparatus for outputting information, comprising:
the road section set determining unit is configured to determine road sections included in the area where the user is located to obtain a road section set in response to the fact that the user is detected to yaw and the area where the user is located includes the viaduct;
a feature information acquisition unit configured to acquire feature information of each link in the link set;
the marking result counting unit is configured to input the characteristic information of every two road sections in the road section set into a pre-trained road section determining model, count the marking result of the road section determining model for each road section to obtain a counting result, the road section determining model is used for marking each road section according to the input characteristic information of the two road sections, and the marking result of each road section represents the probability that the user is located on the road section after yawing;
a road section information determining unit configured to determine information of a road section where the user is located according to the statistical result;
a link information output unit configured to output information of a link where the user is located.
9. The apparatus of claim 8, wherein the apparatus further comprises:
and the path planning unit is configured to re-plan navigation information for the user according to the information of the road section where the user is located.
10. The apparatus of claim 8, wherein the characteristic information of the road segment comprises at least one of:
attribute information, speed information, angle information, information on the yaw point of the user on the road section, forward driving information, and information output by a hidden Markov model.
11. The apparatus of claim 8, wherein the marking result of the section of road comprises 0 or 1; and
the marking result statistic unit is further configured to:
for each road segment, the number of times the road segment is marked as 1 is counted.
12. The apparatus of claim 11, wherein the road segment information determination unit is further configured to:
taking the road segment marked as 1 with the most times as the road segment where the user is located;
and taking the characteristic information of the road section as the information of the road section where the user is located.
13. The apparatus of claim 8, wherein the apparatus further comprises:
a training sample acquisition unit configured to acquire a set of training samples including feature information of a road segment labeled as 1 and feature information of a road segment labeled as 0;
and the model training unit is configured to take the characteristic information of the two road sections as input, take the marks corresponding to the two road sections as expected output, and train to obtain the road section determination model.
14. The apparatus of claim 13, wherein the training sample acquisition unit is further configured to:
in response to receiving a yaw request sent by a user, determining whether the current position of the user is located in an elevated area;
in response to determining that the current position of the user is located in the elevated area, determining a road section to which the user belongs according to the offline map, marking the road section to which the user belongs as 1, and marking other road sections in the elevated area as 0;
and respectively combining the characteristic information of the road section marked as 1 with the characteristic information of the road section marked as 0 to obtain the training sample.
15. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
16. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201911024498.4A 2019-10-25 2019-10-25 Method and apparatus for outputting information Active CN110726414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911024498.4A CN110726414B (en) 2019-10-25 2019-10-25 Method and apparatus for outputting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911024498.4A CN110726414B (en) 2019-10-25 2019-10-25 Method and apparatus for outputting information

Publications (2)

Publication Number Publication Date
CN110726414A CN110726414A (en) 2020-01-24
CN110726414B true CN110726414B (en) 2021-07-27

Family

ID=69223128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911024498.4A Active CN110726414B (en) 2019-10-25 2019-10-25 Method and apparatus for outputting information

Country Status (1)

Country Link
CN (1) CN110726414B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199262A (en) * 2020-08-28 2022-03-18 阿里巴巴集团控股有限公司 Method for training position recognition model, position recognition method and related equipment
CN111986487B (en) * 2020-09-11 2022-02-25 腾讯科技(深圳)有限公司 Road condition information management method and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104990558A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Vehicle navigation method and system
CN106248096A (en) * 2016-09-29 2016-12-21 百度在线网络技术(北京)有限公司 The acquisition methods of road network weight and device
CN109446973A (en) * 2018-10-24 2019-03-08 中车株洲电力机车研究所有限公司 A kind of vehicle positioning method based on deep neural network image recognition
CN109766777A (en) * 2018-12-18 2019-05-17 东软集团股份有限公司 Detection method, device, storage medium and the electronic equipment of abnormal track
CN110097121A (en) * 2019-04-30 2019-08-06 北京百度网讯科技有限公司 A kind of classification method of driving trace, device, electronic equipment and storage medium
US10473467B2 (en) * 2016-05-17 2019-11-12 Mitac International Corp. Method for determining at which level a vehicle is when the vehicle is in a multi-level road system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104990558A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Vehicle navigation method and system
US10473467B2 (en) * 2016-05-17 2019-11-12 Mitac International Corp. Method for determining at which level a vehicle is when the vehicle is in a multi-level road system
CN106248096A (en) * 2016-09-29 2016-12-21 百度在线网络技术(北京)有限公司 The acquisition methods of road network weight and device
CN109446973A (en) * 2018-10-24 2019-03-08 中车株洲电力机车研究所有限公司 A kind of vehicle positioning method based on deep neural network image recognition
CN109766777A (en) * 2018-12-18 2019-05-17 东软集团股份有限公司 Detection method, device, storage medium and the electronic equipment of abnormal track
CN110097121A (en) * 2019-04-30 2019-08-06 北京百度网讯科技有限公司 A kind of classification method of driving trace, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110726414A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
US9494694B1 (en) Method and apparatus of road location inference for moving object
CN109141464B (en) Navigation lane change prompting method and device
US10401188B2 (en) Method and apparatus for providing point of interest information
CN107883974B (en) Navigation path planning method, navigation server and computer readable medium
CN110689804B (en) Method and apparatus for outputting information
CN111862605B (en) Road condition detection method and device, electronic equipment and readable storage medium
CN113934775B (en) Vehicle track map matching method, device, equipment and computer readable medium
CN106855878B (en) Historical driving track display method and device based on electronic map
CN111380546A (en) Vehicle positioning method and device based on parallel road, electronic equipment and medium
CN112590813A (en) Method, apparatus, electronic device, and medium for generating information of autonomous vehicle
CN110726414B (en) Method and apparatus for outputting information
CN112539754B (en) RDS-TMC-based high-precision map and traditional map path matching method and device
JP2016071442A (en) Reliability determination method of map matching result of probe data, device and program
JP2013205177A (en) Travel direction prediction device, travel direction prediction method and program
CN110542425B (en) Navigation path selection method, navigation device, computer equipment and readable medium
JP2010019588A (en) Vehicle navigation system and correction method of position information in vehicle navigation system, and information distribution server and in-vehicle navigation apparatus
JP2014162458A (en) System, method and program for identifying transportation
CN115657684B (en) Vehicle path information generation method, device, equipment and computer readable medium
CN109556614B (en) Positioning method and device for unmanned vehicle
CN113008246B (en) Map matching method and device
CN114862491A (en) Vehicle position determining method, order dispatching method, device, server and storage medium
CN115033807A (en) Recommendation method, device and equipment for future departure and storage medium
CN112857380B (en) Method and device for determining road traffic state, storage medium and electronic equipment
WO2022031222A1 (en) Processing appratus and method for generating route navigation data
CN109297480B (en) Method and system for managing location of device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant