CN111062320A - Viaduct bridge identification method and related product

Viaduct bridge identification method and related product

Info

Publication number
CN111062320A
CN111062320A
Authority
CN
China
Prior art keywords
result, determining, viaduct, preset, identification result
Prior art date
Legal status
Granted
Application number
CN201911296491.8A
Other languages
Chinese (zh)
Other versions
CN111062320B (en)
Inventor
彭冬炜
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911296491.8A
Publication of CN111062320A
Application granted
Publication of CN111062320B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00: Measuring or testing not otherwise provided for
    • G01D21/02: Measuring two or more variables by means not covered by a single other subclass

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application discloses a viaduct bridge identification method and a related product. The method is applied to an electronic device that includes a sensor module, and comprises: acquiring a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane. The embodiment of the application improves the identification accuracy of viaduct scenes.

Description

Viaduct bridge identification method and related product
Technical Field
The application relates to the technical field of electronics, in particular to a viaduct identification method and a related product.
Background
As living standards improve, automobiles have become an important means of transportation in daily life, and the demand for automotive navigation continues to grow.
While driving, users generally rely on a mobile terminal for map navigation. Viaducts, which are common in urban roads, are usually identified from satellite data; however, satellite data is easily disturbed by the environment and its reliability is low. As a result, the identification accuracy for viaduct scenes is poor and the user experience suffers.
Disclosure of Invention
The embodiment of the application provides a viaduct identification method and a related product, which identify viaduct scenes according to lane videos and sensing data, helping to improve viaduct scene identification accuracy and user experience.
In a first aspect, an embodiment of the present application provides an identification method for an overpass, which is applied to an electronic device, where the electronic device includes: a sensor module, the method comprising:
acquiring a lane video of a current driving lane;
determining a first recognition result of the current driving lane according to the lane video;
receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data;
and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane.
In a second aspect, an embodiment of the present application provides an overpass identification apparatus, which is applied to an electronic device, where the electronic device includes: a sensor module, the device comprising:
an acquisition unit, configured to acquire a lane video of a current driving lane;
the determining unit is used for determining a first recognition result of the current driving lane according to the lane video;
the receiving unit is used for receiving the sensing data acquired by the sensor module and determining a second recognition result of the current driving lane according to the sensing data;
and the comparison unit is used for determining that the current driving lane is the viaduct lane if the first identification result is successfully compared with the second identification result.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement part or all of the steps described in the method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the application, the electronic device acquires a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane. Therefore, the viaduct scene can be identified according to the lane videos and the sensing data, the viaduct scene identification accuracy is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a viaduct identification method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another viaduct identification method according to an embodiment of the present application;
fig. 4 is a human-computer interaction diagram of another viaduct identification method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an overpass identification device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Electronic devices may include a variety of handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices communicatively connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal Equipment (terminal device), and so forth having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application, where the electronic device 100 includes a storage and processing circuit 110, and a sensor 170 connected to the storage and processing circuit 110, where:
the electronic device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in storage and processing circuitry 110 may be used to control the operation of electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, to name a few.
The electronic device 100 may include input-output circuitry 150. The input-output circuit 150 may be used to enable the electronic device 100 to input and output data, that is, to allow the electronic device 100 to receive data from an external device and to output data from the electronic device 100 to the external device. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include an ambient light sensor, a proximity sensor based on light or capacitance, a fingerprint recognition module, a touch sensor (for example, an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors.
The electronic device 100 may further include a camera 140. The camera 140 may include an infrared camera, a color image camera, and so on; the camera may be a front camera or a rear camera. The fingerprint recognition module may be integrated below the display screen and used to collect fingerprint images; it may be at least one of the following: an optical fingerprint recognition module, an ultrasonic fingerprint recognition module, or the like, which is not limited here. The front camera may be disposed below the front display screen, and the rear camera may be disposed below the rear display screen. Of course, the front camera or the rear camera may not be integrated with the display screen; in practical applications, the front camera or the rear camera may also be a pop-up structure.
Input-output circuit 150 may also include one or more display screens, and when multiple display screens are provided, such as 2 display screens, one display screen may be provided on the front of the electronic device and another display screen may be provided on the back of the electronic device, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, organic light emitting diode display, electronic ink display, plasma display, display using other display technologies. The display screen 130 may include a barometric pressure sensor, a global navigation satellite positioning (GNSS) sensor, and may also include an array of touch sensors (i.e., the display screen 130 may be a touch-sensitive display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The communication circuit 120 may be used to provide the electronic device 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. The communication circuit 120 may include a first Wi-Fi channel and a second Wi-Fi channel, where the first Wi-Fi channel and the second Wi-Fi channel operate simultaneously to implement dual Wi-Fi functionality. For example, the wireless communication circuitry in communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control the operation of electronic device 100, and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from electronic device 100.
The electronic device described above with reference to fig. 1 may be configured to implement the following functions:
acquiring a lane video of a current driving lane;
determining a first recognition result of the current driving lane according to the lane video;
receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data;
and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a viaduct identification method according to an embodiment of the present application, applied to the electronic device depicted in fig. 1, where the electronic device includes a sensor module. The viaduct identification method comprises the following steps:
step 201, obtaining a lane video of a current driving lane;
Optionally, when it is detected that the target vehicle has started, the electronic device activates its camera module, and the camera module collects the lane video corresponding to the current driving lane of the target vehicle.
The lane video is video data captured in the driving direction of the target vehicle.
Step 202, determining a first recognition result of the current driving lane according to the lane video;
Optionally, a preset image recognition algorithm is obtained and executed on the lane video to obtain a first recognition result corresponding to the current driving lane, where the first recognition result may include: the first preset result or the second preset result.
Optionally, a pre-trained image recognition model is obtained, and the lane video is used as the input of the image recognition model to obtain the first recognition result corresponding to the current driving lane.
Step 203, receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data;
Optionally, the first recognition result is obtained, and receiving the sensing data collected by the sensor module according to the first recognition result includes: judging whether the first recognition result includes the first preset result; if so, generating an air pressure change value acquisition instruction and a global navigation satellite positioning (GNSS) data acquisition instruction, generating a first sensing data acquisition instruction from the two, and sending the first sensing data acquisition instruction to the sensor module. If the first recognition result does not include the first preset result, judging whether the first recognition result includes the second preset result; if so, generating an air pressure change value acquisition instruction, generating a second sensing data acquisition instruction from it, and sending the second sensing data acquisition instruction to the sensor module.
Further, a sensing data acquisition response returned by the sensor module is received, and the sensing data is extracted from the response, where the sensing data may include the first sensing data or the second sensing data. A preset judgment model is then obtained, and the sensing data is used as the input of the judgment model to obtain the second recognition result corresponding to the current driving lane.
Step 204, if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane.
Optionally, if the first recognition result includes the first preset result, the second recognition result may include: an air pressure identification result and a GNSS identification result, where the GNSS identification result may include: a satellite quantity identification result and a satellite signal-to-noise ratio identification result; if the first recognition result includes the second preset result, the second recognition result may include: the air pressure identification result. Optionally, if the first recognition result includes the first preset result, the air pressure identification result and the GNSS identification result are extracted from the second recognition result, and whether the air pressure identification result includes the first preset result is judged; if it does not, the comparison between the first recognition result and the second recognition result is unsuccessful. If the air pressure identification result includes the first preset result, the satellite quantity identification result is obtained from the GNSS identification result, and whether it includes the first preset result is judged; if it does not, the comparison is unsuccessful. If the satellite quantity identification result includes the first preset result, the satellite signal-to-noise ratio identification result is obtained from the GNSS identification result, and whether it includes the first preset result is judged; if it does not, the comparison is unsuccessful; if it does, the comparison between the first recognition result and the second recognition result is successful, and the current driving lane is determined to be a viaduct lane.
Optionally, after determining that the current driving lane is the viaduct lane, obtaining an air pressure change value, determining whether the air pressure change value is greater than 0, and if the air pressure change value is greater than 0, determining that the target vehicle is in a state of exiting the viaduct; and if the air pressure change value is less than 0, determining that the target vehicle is in a state of driving into the viaduct.
Optionally, after determining that the current driving lane is a viaduct lane, obtaining a satellite number change value, judging whether the satellite number change value is greater than 0, and if the satellite number change value is greater than 0, determining that the target vehicle is in a state of driving into the viaduct; and if the satellite quantity variation value is less than 0, determining that the target vehicle is in a state of exiting the viaduct.
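The two post-determination checks above reduce to simple sign tests on the change values. The following is a minimal Python sketch under the sign conventions stated above (a positive air pressure change means descending, so exiting the viaduct; a positive satellite quantity change means an opening sky, so entering it); the function names are illustrative, not from the patent.

```python
def state_from_pressure_change(pressure_change: float) -> str:
    """Infer the driving state from the air pressure change value."""
    if pressure_change > 0:
        return "exiting viaduct"    # pressure rose, altitude dropped
    if pressure_change < 0:
        return "entering viaduct"   # pressure fell, altitude rose
    return "unchanged"


def state_from_satellite_change(satellite_count_change: int) -> str:
    """Infer the driving state from the satellite quantity change value."""
    if satellite_count_change > 0:
        return "entering viaduct"   # more satellites visible on the open deck
    if satellite_count_change < 0:
        return "exiting viaduct"
    return "unchanged"
```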
In a possible example, the determining a first recognition result of the current driving lane according to the lane video includes: acquiring m frames of images contained in the lane video, wherein m is an integer larger than 0; executing an image recognition algorithm on the m frames of images to acquire n frames of viaduct images containing viaduct lanes in the m frames of images, wherein n is an integer greater than or equal to 0 and less than or equal to m; calculating the m frames of images and the n frames of viaduct images according to a preset scene proportion calculation formula to obtain the viaduct scene proportion; judging whether the viaduct scene proportion is greater than a preset scene proportion threshold value or not, if so, calculating the n frames of viaduct images according to a preset occlusion rate algorithm, and determining n viaduct occlusion rates corresponding to the n frames of viaduct images; determining an average viaduct shielding rate according to the n viaduct shielding rates, and judging whether the average viaduct shielding rate is greater than a preset shielding rate threshold value or not; if the average viaduct shielding rate is not greater than the shielding rate threshold value, determining that the first identification result is a first preset result; otherwise, determining the first recognition result as a second preset result.
Wherein the scene scale threshold may include: 50%, 60%, 70%, etc., without limitation.
Wherein the occlusion rate threshold may include: 60%, 65%, 70%, etc., without limitation.
Optionally, the m frames of images contained in the lane video are acquired, where any one of the m frames includes: the current lane, the current lane line, and the adjacent lane environment. An image recognition algorithm is executed on the m frames to recognize viaduct lanes in the images, obtaining the n frames of viaduct images. A preset viaduct scene proportion calculation formula is acquired and applied to the n frames of viaduct images and the m frames of images to determine the viaduct scene proportion, where the formula may be: a = (n/m) × 100%, where a is the viaduct scene proportion. Whether the viaduct scene proportion a is greater than the preset scene proportion threshold is judged. If so, the n viaduct occlusion rates of the n viaduct images are acquired, where the viaduct occlusion rate indicates the proportion of viaduct pixels that are occluded; the average viaduct occlusion rate of the n occlusion rates is calculated, and whether it is greater than the preset occlusion rate threshold is judged. If the average viaduct occlusion rate is not greater than the occlusion rate threshold, the first recognition result is determined to be the first preset result; if it is greater, the first recognition result is determined to be the second preset result, and it is further judged whether the average occlusion rate is greater than a first condition threshold: if not, the second preset result meets the first condition; if so, the second preset result meets the second condition. If the viaduct scene proportion is smaller than the scene proportion threshold, whether it is smaller than a second condition threshold is judged: if not, the first recognition result is a second preset result meeting the first condition; if so, the first recognition result is a second preset result meeting the second condition.
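The logic above condenses to the following Python sketch. The detect_viaduct and occlusion_rate routines and the two condition thresholds are placeholders (assumptions); the patent does not specify the underlying image recognition algorithm.

```python
from typing import Any, List

SCENE_RATIO_THRESHOLD = 0.6       # example value from the text (60%)
OCCLUSION_RATE_THRESHOLD = 0.65   # example value from the text (65%)
FIRST_CONDITION_THRESHOLD = 0.8   # hypothetical, not given in the text
SECOND_CONDITION_THRESHOLD = 0.2  # hypothetical, not given in the text


def detect_viaduct(frame: Any) -> bool:
    """Placeholder for the preset image recognition algorithm."""
    raise NotImplementedError


def occlusion_rate(frame: Any) -> float:
    """Placeholder: proportion of viaduct pixels that are occluded."""
    raise NotImplementedError


def first_recognition_result(frames: List[Any]) -> str:
    """Map the m video frames to the first recognition result."""
    viaduct_frames = [f for f in frames if detect_viaduct(f)]  # n frames
    ratio = len(viaduct_frames) / len(frames)                  # a = n/m
    if ratio > SCENE_RATIO_THRESHOLD:
        avg = sum(occlusion_rate(f) for f in viaduct_frames) / len(viaduct_frames)
        if avg <= OCCLUSION_RATE_THRESHOLD:
            return "first_preset"                # unoccluded viaduct lane
        if avg <= FIRST_CONDITION_THRESHOLD:
            return "second_preset_condition_1"   # occluded viaduct lane
        return "second_preset_condition_2"       # treated as non-viaduct
    # scene ratio below the main threshold
    if ratio >= SECOND_CONDITION_THRESHOLD:
        return "second_preset_condition_1"       # occluded viaduct lane
    return "second_preset_condition_2"           # non-viaduct lane
```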
In a possible example, the receiving sensing data collected by the sensor module includes: if the first recognition result comprises the first preset result, receiving first sensing data acquired by the sensor module, wherein the first sensing data comprises: the air pressure change value and global navigation satellite positioning GNSS data; if the first identification result comprises the second preset result, second sensing data collected by the sensor module are received, wherein the second sensing data comprise: the air pressure change value.
The first preset result indicates that the current lane is a non-shielded viaduct lane, and the second preset result indicates that the current lane is a non-viaduct lane or the current lane is a shielded viaduct lane.
The air pressure change value is the difference between first air pressure acquired at a first time corresponding to the lane video and second air pressure acquired at a second time corresponding to the lane video;
wherein the GNSS data includes: a satellite quantity change value and a satellite signal-to-noise ratio change value, where the satellite quantity change value is the difference between a first satellite quantity acquired at the first time corresponding to the lane video and a second satellite quantity acquired at the second time corresponding to the lane video; the satellite signal-to-noise ratio change value is the difference between a first satellite signal-to-noise ratio acquired at the first time corresponding to the lane video and a second satellite signal-to-noise ratio acquired at the second time corresponding to the lane video.
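Each change value is thus the first-time reading minus the second-time reading. A one-line sketch, where the dictionary keys and units are illustrative assumptions:

```python
def change_values(first_sample: dict, second_sample: dict) -> dict:
    """first_sample / second_sample: readings taken at the first and second
    times corresponding to the lane video, e.g.
    {"pressure": 101200.0, "sat_count": 18, "snr": 32.0}.
    Returns first minus second for each key."""
    return {key: first_sample[key] - second_sample[key] for key in first_sample}
```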
In a possible example, the determining a second recognition result of the current driving lane from the sensed data includes: if the sensing data comprises the first sensing data, determining an air pressure identification result according to the air pressure change value, determining a GNSS identification result according to the GNSS data, and determining the second identification result according to the air pressure identification result and the GNSS identification result; and if the sensing data comprises the second sensing data, determining an air pressure identification result according to the air pressure change value, and determining the second identification result according to the air pressure identification result.
Wherein the air pressure identification result may include: the first preset result or the second preset result; the GNSS identification result may include: the satellite quantity identification result and the satellite signal-to-noise ratio identification result, where the satellite quantity identification result may include: the first preset result or the second preset result, and the satellite signal-to-noise ratio identification result may include: the first preset result or the second preset result. In a possible example, the determining the air pressure identification result according to the air pressure change value includes: acquiring the air pressure change value and a preset height change value calculation formula; taking the air pressure change value as the input of the height change value calculation formula, and determining the height change value corresponding to the air pressure change value; judging whether the height change value is greater than a preset height change threshold value; if the height change value is greater than the height change threshold value, determining that the air pressure identification result is the first preset result; otherwise, determining that the air pressure identification result is the second preset result.
Optionally, the air pressure change value x is obtained and its absolute value |x| is calculated. A preset air pressure change value threshold is obtained, and whether |x| is greater than the threshold is judged: if so, the air pressure identification result is determined to be the first preset result; if not, it is determined to be the second preset result.
Optionally, the air pressure change value x is obtained, the absolute value |x| of x is calculated, and the height change value calculation formula is obtained; |x| is substituted into the formula to obtain the height change value h, where the formula may be: h = 44300 × (1 - (|x|/p0)^(1/5.256)), where p0 is the standard atmospheric pressure value.
Wherein the height variation threshold may include: 5 meters, 10 meters, 15 meters, etc., without limitation.
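Both variants of the air pressure identification above fit in a few lines. A sketch follows; the height formula is the one reconstructed above (it mirrors the international barometric formula with |x| in place of the absolute pressure, so x and p0 must share units, Pa assumed here), and the default threshold is one of the example values.

```python
P0 = 101325.0  # standard atmospheric pressure in Pa (assumed units)


def pressure_result_by_threshold(x: float, x_threshold: float) -> str:
    """First variant: compare |x| directly against a pressure threshold."""
    return "first_preset" if abs(x) > x_threshold else "second_preset"


def pressure_result_by_height(x: float, h_threshold: float = 10.0) -> str:
    """Second variant: convert |x| to a height change h, then compare.
    h = 44300 * (1 - (|x|/p0)^(1/5.256)), per the formula above."""
    h = 44300.0 * (1.0 - (abs(x) / P0) ** (1.0 / 5.256))
    return "first_preset" if h > h_threshold else "second_preset"
```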
In one possible example, the determining the GNSS identification result according to the GNSS data includes: the GNSS data includes: the satellite quantity change value and the satellite signal-to-noise ratio change value; the GNSS identification result includes: the satellite quantity identification result and the satellite signal-to-noise ratio identification result. Whether the satellite quantity change value is greater than a preset quantity change threshold value is judged: if so, the satellite quantity identification result is determined to be the first preset result; otherwise, it is determined to be the second preset result. Whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold value is judged: if so, the satellite signal-to-noise ratio identification result is determined to be the first preset result; otherwise, it is determined to be the second preset result.
Wherein the number change threshold may include: 10, 15, 20, etc., without limitation.
Wherein the snr variation threshold may include: 5, 10, 15, etc., without limitation.
The first preset result is used for indicating that the current lane is a viaduct lane, and the second preset result is used for indicating that the current lane is a non-viaduct lane.
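A sketch of the GNSS identification step, using the example thresholds listed above (15 satellites, a signal-to-noise ratio change of 10); the returned dictionary layout is an assumption for illustration.

```python
def gnss_result(sat_count_change: int, snr_change: float,
                count_threshold: int = 15,
                snr_threshold: float = 10.0) -> dict:
    """Derive the two GNSS sub-results from the change values."""
    return {
        "satellite_count": ("first_preset" if sat_count_change > count_threshold
                            else "second_preset"),
        "satellite_snr": ("first_preset" if snr_change > snr_threshold
                          else "second_preset"),
    }
```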
In a possible example, the determining that the first recognition result is successfully compared with the second recognition result includes: judging whether the first recognition result is consistent with the second recognition result; if the first recognition result is consistent with the second recognition result, determining that the comparison is successful; and if the first recognition result is inconsistent with the second recognition result, determining that the comparison is unsuccessful.
Optionally, if the first identification result includes a first preset result, a preset first comparison rule is obtained, and the first identification result and the second identification result are compared according to the first comparison rule, where the first comparison rule may include: acquiring an air pressure identification result, a satellite quantity identification result and a satellite signal-to-noise ratio identification result from a second identification result, judging whether the first identification result, the air pressure identification result, the satellite quantity identification result and the satellite signal-to-noise ratio identification result all contain a first preset result, if the first identification result, the air pressure identification result, the satellite quantity identification result and the satellite signal-to-noise ratio identification result all contain the first preset result, determining that the first identification result is consistent with the second identification result, determining that the first identification result and the second identification result are successfully compared, and if at least one of the air pressure identification result, the satellite quantity identification result and the satellite signal-to-noise ratio identification result does not contain the first preset result, determining that the first identification result is inconsistent with the second identification result, and determining that the first identification result and the second identification result are unsuccessfully compared.
Optionally, if the first recognition result includes a second preset result, analyzing the second preset result, if the second preset result meets a first condition, extracting an air pressure recognition result from the second recognition result, determining whether the air pressure recognition result includes the first preset result, if the air pressure recognition result includes the first preset result, determining that the first recognition result and the second recognition result are successfully compared, and determining that the current driving lane is the viaduct lane; and if the air pressure identification result does not contain the first preset result, determining that the comparison between the first identification result and the second identification result is unsuccessful.
Further, if the second preset result meets a second condition, it is determined that the comparison between the first recognition result and the second recognition result is unsuccessful.
In combination with the above, an example follows. The first preset result may be: an unoccluded viaduct scene; the second preset result may be: an occluded viaduct scene or a non-viaduct scene, where a second preset result of the occluded viaduct scene meets the first condition, and a second preset result of the non-viaduct scene meets the second condition. When the first recognition result is the first preset result, the second recognition result is obtained, and whether the air pressure identification result, the satellite quantity identification result, and the satellite signal-to-noise ratio identification result are all the first preset result is judged: if so, the comparison between the first recognition result and the second recognition result is successful; if at least one of them is the second preset result, the comparison is unsuccessful. When the first recognition result is the second preset result and meets the first condition, that is, an occluded viaduct scene, whether the air pressure identification result includes the first preset result is judged: if so, the comparison is successful; if not, the comparison is unsuccessful. When the first recognition result is the second preset result and meets the second condition, the comparison between the first recognition result and the second recognition result is unsuccessful.
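The three comparison cases condense to the sketch below; the result labels reuse those from the earlier sketches, and the layout of the second dictionary is an assumption.

```python
def compare(first: str, second: dict) -> bool:
    """Return True when the first and second recognition results are
    successfully compared, per the three cases described above."""
    if first == "first_preset":
        # All three sub-results must also be the first preset result.
        return all(second[key] == "first_preset"
                   for key in ("pressure", "satellite_count", "satellite_snr"))
    if first == "second_preset_condition_1":          # occluded viaduct scene
        return second["pressure"] == "first_preset"   # only pressure is checked
    return False  # second preset result meeting the second condition
```

When compare returns True, the current driving lane is determined to be a viaduct lane.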
It can be seen that, in the embodiment of the application, the electronic device acquires a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane. Therefore, the viaduct scene can be identified according to the lane videos and the sensing data, the viaduct scene identification accuracy is improved, and the user experience is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of another viaduct identification method according to an embodiment of the present application, applied to the electronic device depicted in fig. 1, where the electronic device includes a sensor module. The viaduct identification method comprises the following steps:
Step 301, acquiring a lane video of a current driving lane;
step 302, determining a first recognition result of the current driving lane according to the lane video;
step 303, receiving first sensing data acquired by the sensor module if the first identification result includes the first preset result, where the first sensing data includes: the air pressure change value and global navigation satellite positioning GNSS data;
step 304, if the first identification result includes the second preset result, receiving second sensing data acquired by the sensor module, where the second sensing data includes: the air pressure change value;
step 305, determining a second recognition result of the current driving lane according to the sensed data;
step 306, if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is the viaduct lane.
The detailed description of the steps 301 to 306 may refer to the corresponding steps of the viaduct bridge identification method described in fig. 2.
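Steps 301 to 306 compose the earlier sketches into one flow. The sketch below assumes a sensor object exposing pressure_change() and gnss_changes(); this interface is hypothetical, not from the patent.

```python
def identify_viaduct(frames, sensor) -> bool:
    """End-to-end flow of steps 301-306, built from the earlier sketches."""
    first = first_recognition_result(frames)          # steps 301-302
    second = {"pressure": pressure_result_by_height(sensor.pressure_change())}
    if first == "first_preset":                       # step 303: pressure + GNSS
        sat_change, snr_change = sensor.gnss_changes()
        second.update(gnss_result(sat_change, snr_change))
    # step 304: otherwise only the air pressure change value is collected
    return compare(first, second)                     # steps 305-306
```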
It can be seen that, in the embodiment of the application, the electronic device acquires a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; if the first recognition result comprises the first preset result, receiving first sensing data acquired by the sensor module, wherein the first sensing data comprises: the air pressure change value and global navigation satellite positioning GNSS data; if the first identification result comprises the second preset result, second sensing data collected by the sensor module is received, wherein the second sensing data comprises: the air pressure variation value; determining a second recognition result of the current driving lane according to the sensed data; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is the viaduct lane. Therefore, the viaduct lane can be identified through the lane videos, the air pressure change values and the GNSS data, the viaduct scene identification accuracy can be improved, and the user experience is improved.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another viaduct identification method in the present application, applied to the electronic device described in fig. 1, where the electronic device includes a sensor module. The viaduct identification method comprises the following steps:
step 401, obtaining a lane video of a current driving lane;
step 402, determining a first recognition result of the current driving lane according to the lane video;
step 403, receiving sensing data acquired by the sensor module;
step 404, if the sensing data includes the first sensing data, determining an air pressure identification result according to the air pressure change value, determining a GNSS identification result according to the GNSS data, and determining the second identification result according to the air pressure identification result and the GNSS identification result;
step 405, if the sensing data includes the second sensing data, determining an air pressure identification result according to the air pressure change value, and determining the second identification result according to the air pressure identification result;
and 406, if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is the viaduct lane.
The detailed description of the steps 401 to 406 may refer to the corresponding steps of the viaduct bridge identification method described in fig. 2.
It can be seen that, in the embodiment of the application, the electronic device acquires a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module; if the sensing data comprises the first sensing data, determining an air pressure identification result according to the air pressure change value, determining a GNSS identification result according to the GNSS data, and determining the second identification result according to the air pressure identification result and the GNSS identification result; if the sensing data comprises the second sensing data, determining an air pressure identification result according to the air pressure change value, and determining the second identification result according to the air pressure identification result; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is the viaduct lane. Therefore, the first recognition result can be determined according to the lane video, the second recognition result is determined through the air pressure change value and the GNSS data, and the viaduct lane and the viaduct scene are recognized through comparing the first recognition result and the second recognition result, so that the viaduct scene recognition accuracy is improved, and the user experience is improved.
Consistent with the embodiments shown in fig. 2, fig. 3, and fig. 4, please refer to fig. 5, and fig. 5 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application, as shown in the figure, the electronic device 500 includes an application processor 510, a memory 520, a communication interface 530, a sensor module 540, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the application processor 510, and the one or more programs 521 include instructions for:
acquiring a lane video of a current driving lane;
determining a first recognition result of the current driving lane according to the lane video;
receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data;
and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane.
It can be seen that, in the embodiment of the application, the electronic device acquires a lane video of a current driving lane; determining a first recognition result of the current driving lane according to the lane video; receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data; and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane. Therefore, the viaduct scene can be identified according to the lane videos and the sensing data, the viaduct scene identification accuracy is improved, and the user experience is improved.
In a possible example, in respect of the determination of the first recognition result of the current driving lane from the lane video, the instructions in the program are specifically configured to: acquiring m frames of images contained in the lane video, wherein m is an integer larger than 0; executing an image recognition algorithm on the m frames of images to acquire n frames of viaduct images containing viaduct lanes in the m frames of images, wherein n is an integer greater than or equal to 0 and less than or equal to m; calculating the m frames of images and the n frames of viaduct images according to a preset scene proportion calculation formula to obtain the viaduct scene proportion; judging whether the viaduct scene proportion is greater than a preset scene proportion threshold value or not, if so, calculating the n frames of viaduct images according to a preset occlusion rate algorithm, and determining n viaduct occlusion rates corresponding to the n frames of viaduct images; determining an average viaduct shielding rate according to the n viaduct shielding rates, and judging whether the average viaduct shielding rate is greater than a preset shielding rate threshold value or not; if the average viaduct shielding rate is not greater than the shielding rate threshold value, determining that the first identification result is a first preset result; otherwise, determining the first recognition result as a second preset result.
In a possible example, in terms of the receiving of the sensing data collected by the sensor module, the instructions in the program are specifically configured to perform the following operations: if the first recognition result comprises the first preset result, receiving first sensing data acquired by the sensor module, wherein the first sensing data comprises: the air pressure change value and global navigation satellite positioning GNSS data; if the first identification result comprises the second preset result, second sensing data collected by the sensor module are received, wherein the second sensing data comprise: the air pressure change value.
In a possible example, the instructions in the program are in particular for performing the following operations in respect of the determination of the second recognition result of the current driving lane from the sensed data: if the sensing data comprises the first sensing data, determining an air pressure identification result according to the air pressure change value, determining a GNSS identification result according to the GNSS data, and determining the second identification result according to the air pressure identification result and the GNSS identification result; and if the sensing data comprises the second sensing data, determining an air pressure identification result according to the air pressure change value, and determining the second identification result according to the air pressure identification result.
In a possible example, in the aspect of determining the air pressure identification result according to the air pressure change value, the instructions in the program are specifically configured to perform the following operations: acquiring the air pressure change value and a preset height change value calculation formula; taking the air pressure change value as the input of the height change value calculation formula, and determining the height change value corresponding to the air pressure change value; judging whether the height change value is greater than a preset height change threshold value; if the height change value is greater than the height change threshold value, determining that the air pressure identification result is the first preset result; otherwise, determining that the air pressure identification result is the second preset result.
In one possible example, in said determining the GNSS identification result according to the GNSS data, the instructions in the program are specifically configured to perform the following operations: the GNSS data includes: the satellite quantity change value and the satellite signal-to-noise ratio change value; the GNSS identification result includes: the satellite quantity identification result and the satellite signal-to-noise ratio identification result; judging whether the satellite quantity change value is greater than a preset quantity change threshold value: if so, determining that the satellite quantity identification result is the first preset result; otherwise, determining that it is the second preset result; judging whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold value: if so, determining that the satellite signal-to-noise ratio identification result is the first preset result; otherwise, determining that it is the second preset result.
In a possible example, in the case that the comparison of the first recognition result and the second recognition result is successful, the instructions in the program are specifically configured to perform the following operations: judging whether the first identification result is consistent with the second identification result, and if so, determining that the first identification result is successfully compared with the second identification result; and if the first recognition result is inconsistent with the second recognition result, determining that the first recognition result is not successfully compared with the second recognition result.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 6 is a block diagram of functional units of an overpass identification apparatus 600 according to an embodiment of the present application, where the overpass identification apparatus 600 is applied to an electronic device, and the overpass identification apparatus 600 includes an obtaining unit 601, a determining unit 602, a receiving unit 603, and a comparing unit 604, where:
an obtaining unit 601, configured to obtain a lane video of a current driving lane;
a determining unit 602, configured to determine a first recognition result of the current driving lane according to the lane video;
the receiving unit 603 is configured to receive sensing data acquired by the sensor module, and determine a second recognition result of the current driving lane according to the sensing data;
a comparing unit 604, configured to determine that the current driving lane is a viaduct lane if the first identification result and the second identification result are successfully compared.
It can be seen that, in the embodiments of the present application, the electronic device acquires a lane video of the current driving lane; determines a first recognition result of the current driving lane according to the lane video; receives sensing data collected by the sensor module and determines a second recognition result of the current driving lane according to the sensing data; and, if the comparison between the first recognition result and the second recognition result is successful, determines that the current driving lane is a viaduct lane. The viaduct scene is thus identified from both the lane video and the sensing data, which improves the accuracy of viaduct scene identification and the user experience.
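The four steps compose as in the following minimal sketch; the three injected callables are placeholders for the per-step sketches shown alongside the corresponding examples, and reading a successful comparison as agreement on the first preset result is an assumption, not a rule stated in this application:

```python
FIRST = "first_preset_result"  # label shared with the per-step sketches

def identify_viaduct_lane(lane_video, sensor_module, first_recognition,
                          collect_sensing_data, second_recognition):
    """Sketch of the overall flow under the assumptions stated above."""
    first_result = first_recognition(lane_video)                     # video branch
    sensing_data = collect_sensing_data(sensor_module, first_result)
    second_result = second_recognition(sensing_data)                 # sensor branch
    # The comparison succeeds when the two results are consistent; requiring
    # agreement on the first preset result is one plausible reading.
    return first_result == second_result == FIRST
```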
In a possible example, in the aspect of determining the first recognition result of the current driving lane according to the lane video, the determining unit 602 is specifically configured to: acquire m frames of images contained in the lane video, where m is an integer greater than 0; execute an image recognition algorithm on the m frames of images to acquire n frames of viaduct images containing viaduct lanes among the m frames of images, where n is an integer greater than or equal to 0 and less than or equal to m; calculate a viaduct scene proportion from the m frames of images and the n frames of viaduct images according to a preset scene proportion calculation formula; judge whether the viaduct scene proportion is greater than a preset scene proportion threshold value, and if so, calculate the n frames of viaduct images according to a preset occlusion rate algorithm to determine n viaduct occlusion rates corresponding to the n frames of viaduct images; determine an average viaduct occlusion rate according to the n viaduct occlusion rates, and judge whether the average viaduct occlusion rate is greater than a preset occlusion rate threshold value; if the average viaduct occlusion rate is not greater than the occlusion rate threshold value, determine that the first recognition result is a first preset result; otherwise, determine that the first recognition result is a second preset result.
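A sketch of this video branch follows; detect_viaduct and occlusion_rate are hypothetical stand-ins for the unspecified image recognition algorithm and occlusion rate algorithm, and the ratio n/m and both threshold values are assumptions:

```python
FIRST, SECOND = "first_preset_result", "second_preset_result"

def first_recognition(frames, detect_viaduct, occlusion_rate,
                      scene_ratio_threshold=0.6, occlusion_threshold=0.5):
    """Sketch of the video branch; the two callables are injected stand-ins
    for the unspecified per-frame algorithms."""
    viaduct_frames = [f for f in frames if detect_viaduct(f)]  # n of m frames
    m, n = len(frames), len(viaduct_frames)
    scene_ratio = n / m if m else 0.0               # assumed formula: n / m
    if n == 0 or scene_ratio <= scene_ratio_threshold:
        return SECOND
    avg_occlusion = sum(occlusion_rate(f) for f in viaduct_frames) / n
    # Note the direction: a LOW average occlusion rate maps to the first
    # preset result, as described above.
    return FIRST if avg_occlusion <= occlusion_threshold else SECOND
```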
In a possible example, in the aspect of receiving the sensing data collected by the sensor module, the receiving unit 603 is specifically configured to: if the first recognition result comprises the first preset result, receive first sensing data collected by the sensor module, where the first sensing data comprises: an air pressure change value and Global Navigation Satellite System (GNSS) data; and if the first recognition result comprises the second preset result, receive second sensing data collected by the sensor module, where the second sensing data comprises: the air pressure change value.
In a possible example, in the aspect of determining the second recognition result of the current driving lane according to the sensing data, the receiving unit 603 is specifically configured to: if the sensing data comprises the first sensing data, determine an air pressure identification result according to the air pressure change value, determine a GNSS identification result according to the GNSS data, and determine the second recognition result according to the air pressure identification result and the GNSS identification result; and if the sensing data comprises the second sensing data, determine the air pressure identification result according to the air pressure change value, and determine the second recognition result according to the air pressure identification result.
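One way to sketch this combination step is shown below; the unanimity rule for merging the per-sensor results is an assumption (the text only says the second recognition result is determined "according to" them), and the two identification callables are the sketches shown with the GNSS step above and the barometric step below:

```python
FIRST, SECOND = "first_preset_result", "second_preset_result"

def second_recognition(sensing_data, pressure_identification, gnss_identification):
    """Sketch of the sensor branch; the merge rule is assumed, not given."""
    pressure_result = pressure_identification(sensing_data["pressure_change_pa"])
    gnss = sensing_data.get("gnss")     # present only in the first sensing data
    if gnss is None:                    # second sensing data: barometer only
        return pressure_result
    quantity_result, snr_result = gnss_identification(gnss["count_change"],
                                                      gnss["snr_change_db"])
    results = (pressure_result, quantity_result, snr_result)
    return FIRST if all(r == FIRST for r in results) else SECOND
```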
In a possible example, in the aspect of determining the air pressure identification result according to the air pressure change value, the receiving unit 603 is specifically configured to: acquire the air pressure change value and a preset height change value calculation formula; take the air pressure change value as the input of the height change value calculation formula, and determine the height change value corresponding to the air pressure change value; judge whether the height change value is greater than a preset height change threshold value; if the height change value is greater than the height change threshold value, determine that the air pressure identification result is the first preset result; otherwise, determine that the air pressure identification result is the second preset result.
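The height change value calculation formula itself is not specified here; the sketch below substitutes the linear hydrostatic approximation dh = -dp / (rho * g) as a stated assumption, along with the threshold value:

```python
FIRST, SECOND = "first_preset_result", "second_preset_result"

def pressure_identification(pressure_change_pa, height_threshold_m=8.0):
    """Sketch of the barometric step under an assumed height-change formula."""
    rho_air, g = 1.225, 9.80665  # sea-level air density (kg/m^3), gravity (m/s^2)
    # Hydrostatic approximation: roughly 12 Pa of pressure drop per metre
    # climbed, so a pressure decrease maps to a positive height change.
    height_change_m = -pressure_change_pa / (rho_air * g)
    return FIRST if height_change_m > height_threshold_m else SECOND
```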
In a possible example, in the aspect of determining the GNSS identification result according to the GNSS data, where the GNSS data comprises a satellite quantity change value and a satellite signal-to-noise ratio change value, and the GNSS identification result comprises a satellite quantity identification result and a satellite signal-to-noise ratio identification result, the receiving unit 603 is specifically configured to: judge whether the satellite quantity change value is greater than a preset quantity change threshold value, and if so, determine that the satellite quantity identification result is the first preset result; otherwise, determine that the satellite quantity identification result is the second preset result; and judge whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold value, and if so, determine that the satellite signal-to-noise ratio identification result is the first preset result; otherwise, determine that the satellite signal-to-noise ratio identification result is the second preset result.
In a possible example, in the aspect of the successful comparison between the first recognition result and the second recognition result, the comparing unit 604 is specifically configured to: judge whether the first recognition result is consistent with the second recognition result; if the first recognition result is consistent with the second recognition result, determine that the comparison between the first recognition result and the second recognition result is successful; and if the first recognition result is inconsistent with the second recognition result, determine that the comparison between the first recognition result and the second recognition result is unsuccessful.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only a division of logical functions, and other divisions may be adopted in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has illustrated the principles and implementations of the present application; the above description of the embodiments is only provided to help understand the method and the core concept of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (10)

1. A viaduct identification method, applied to an electronic device, wherein the electronic device comprises a sensor module, and the method comprises:
acquiring a lane video of a current driving lane;
determining a first recognition result of the current driving lane according to the lane video;
receiving sensing data acquired by the sensor module, and determining a second recognition result of the current driving lane according to the sensing data;
and if the first recognition result is successfully compared with the second recognition result, determining that the current driving lane is a viaduct lane.
2. The method of claim 1, wherein said determining a first recognition result of the current driving lane according to the lane video comprises:
acquiring m frames of images contained in the lane video, wherein m is an integer greater than 0;
executing an image recognition algorithm on the m frames of images to acquire n frames of viaduct images containing viaduct lanes in the m frames of images, wherein n is an integer greater than or equal to 0 and less than or equal to m;
calculating the m frames of images and the n frames of viaduct images according to a preset scene proportion calculation formula to obtain a viaduct scene proportion;
judging whether the viaduct scene proportion is greater than a preset scene proportion threshold value or not, if so, calculating the n frames of viaduct images according to a preset occlusion rate algorithm, and determining n viaduct occlusion rates corresponding to the n frames of viaduct images;
determining an average viaduct occlusion rate according to the n viaduct occlusion rates, and judging whether the average viaduct occlusion rate is greater than a preset occlusion rate threshold value;
if the average viaduct occlusion rate is not greater than the occlusion rate threshold value, determining that the first recognition result is a first preset result;
otherwise, determining that the first recognition result is a second preset result.
3. The method of claim 1, wherein said receiving the sensing data collected by the sensor module comprises:
if the first recognition result comprises the first preset result, receiving first sensing data collected by the sensor module, wherein the first sensing data comprises: an air pressure change value and Global Navigation Satellite System (GNSS) data;
if the first recognition result comprises the second preset result, receiving second sensing data collected by the sensor module, wherein the second sensing data comprises: the air pressure change value.
4. The method of claim 3, wherein said determining a second recognition result of the current driving lane according to the sensing data comprises:
if the sensing data comprises the first sensing data, determining an air pressure identification result according to the air pressure change value, determining a GNSS identification result according to the GNSS data, and determining the second recognition result according to the air pressure identification result and the GNSS identification result;
and if the sensing data comprises the second sensing data, determining the air pressure identification result according to the air pressure change value, and determining the second recognition result according to the air pressure identification result.
5. The method of claim 4, wherein said determining an air pressure identification result according to the air pressure change value comprises:
acquiring the air pressure change value and a preset height change value calculation formula;
taking the air pressure change value as the input of the height change value calculation formula, and determining the height change value corresponding to the air pressure change value;
judging whether the height change value is greater than a preset height change threshold value or not, and if the height change value is greater than the height change threshold value, determining that the air pressure identification result is the first preset result;
otherwise, determining that the air pressure identification result is the second preset result.
6. The method of claim 5, wherein the GNSS data comprises: a satellite quantity change value and a satellite signal-to-noise ratio change value, the GNSS identification result comprises: a satellite quantity identification result and a satellite signal-to-noise ratio identification result, and said determining the GNSS identification result according to the GNSS data comprises:
judging whether the satellite quantity change value is greater than a preset quantity change threshold value; if so, determining that the satellite quantity identification result is the first preset result; otherwise, determining that the satellite quantity identification result is the second preset result;
judging whether the satellite signal-to-noise ratio change value is greater than a preset signal-to-noise ratio change threshold value; if so, determining that the satellite signal-to-noise ratio identification result is the first preset result; otherwise, determining that the satellite signal-to-noise ratio identification result is the second preset result.
7. The method of claim 1, wherein the successful comparison between the first recognition result and the second recognition result comprises:
judging whether the first recognition result is consistent with the second recognition result; if the first recognition result is consistent with the second recognition result, determining that the comparison between the first recognition result and the second recognition result is successful; and if the first recognition result is inconsistent with the second recognition result, determining that the comparison between the first recognition result and the second recognition result is unsuccessful.
8. A viaduct identification apparatus, applied to an electronic device, wherein the electronic device comprises a sensor module, and the apparatus comprises:
an obtaining unit, configured to obtain a lane video of a current driving lane;
a determining unit, configured to determine a first recognition result of the current driving lane according to the lane video;
a receiving unit, configured to receive sensing data collected by the sensor module and determine a second recognition result of the current driving lane according to the sensing data; and
a comparing unit, configured to determine that the current driving lane is a viaduct lane if the comparison between the first recognition result and the second recognition result is successful.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1 to 7.
CN201911296491.8A 2019-12-16 2019-12-16 Overpass identification method and related products Active CN111062320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911296491.8A CN111062320B (en) 2019-12-16 2019-12-16 Overpass identification method and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911296491.8A CN111062320B (en) 2019-12-16 2019-12-16 Overpass identification method and related products

Publications (2)

Publication Number Publication Date
CN111062320A true CN111062320A (en) 2020-04-24
CN111062320B CN111062320B (en) 2023-09-15

Family

ID=70301113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911296491.8A Active CN111062320B (en) 2019-12-16 2019-12-16 Overpass identification method and related products

Country Status (1)

Country Link
CN (1) CN111062320B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090276153A1 (en) * 2008-05-01 2009-11-05 Chun-Huang Lee Navigating method and navigation apparatus using road image identification
CN106530794A (en) * 2016-12-28 2017-03-22 上海仪电数字技术股份有限公司 Automatic identification and calibration method of driving road and system thereof
CN107657810A (en) * 2016-07-26 2018-02-02 高德信息技术有限公司 A kind of overpass action identification method and device up and down
CN108873040A (en) * 2017-05-16 2018-11-23 通用汽车环球科技运作有限责任公司 Method and apparatus for detecting road layer position
CN109872360A (en) * 2019-01-31 2019-06-11 斑马网络技术有限公司 Localization method and device, storage medium, electric terminal
CN110164164A (en) * 2019-04-03 2019-08-23 浙江工业大学之江学院 The method for identifying complicated road precision using camera shooting function enhancing Mobile Telephone Gps software

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114485687A (en) * 2020-11-13 2022-05-13 博泰车联网科技(上海)股份有限公司 Vehicle position determining method and related device
CN114485687B (en) * 2020-11-13 2023-09-26 博泰车联网科技(上海)股份有限公司 Vehicle position determining method and related device

Also Published As

Publication number Publication date
CN111062320B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN109241859B (en) Fingerprint identification method and related product
CN110636174B (en) Bus code calling method and mobile terminal
CN109977845B (en) Driving region detection method and vehicle-mounted terminal
CN106484283A (en) A kind of display control method and mobile terminal
CN111475072B (en) Payment information display method and electronic equipment
CN114501119B (en) Interactive display method, device, electronic equipment, system and storage medium
CN110784672B (en) Video data transmission method, device, equipment and storage medium
CN111126995A (en) Payment method and electronic equipment
CN108510267B (en) Account information acquisition method and mobile terminal
CN112052778A (en) Traffic sign identification method and related device
CN111191606A (en) Image processing method and related product
US20140348334A1 (en) Portable terminal and method for detecting earphone connection
CN110796673A (en) Image segmentation method and related product
CN108230680B (en) Vehicle behavior information acquisition method and device and terminal
CN111062320B (en) Overpass identification method and related products
CN111343321B (en) Backlight brightness adjusting method and related product
CN109040457B (en) Screen brightness adjusting method and mobile terminal
CN116824548A (en) Obstacle determination method, device, equipment and readable storage medium
CN107358183A (en) Living iris detection method and Related product
CN111427644A (en) Target behavior identification method and electronic equipment
CN109685850B (en) Transverse positioning method and vehicle-mounted equipment
CN108833660B (en) Parking space information processing method and device and mobile terminal
CN110046569B (en) Unmanned driving data processing method and device and electronic equipment
CN110795713B (en) Fingerprint verification method and device
CN112435671A (en) Intelligent voice control method and system for accurately recognizing Chinese

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant