CN109489680B - Reference trajectory line generation method for spiral lane and vehicle-mounted equipment


Info

Publication number
CN109489680B
CN109489680B (application CN201811644294.6A)
Authority
CN
China
Prior art keywords: target, marks, reference trajectory, fitting, spiral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811644294.6A
Other languages
Chinese (zh)
Other versions
CN109489680A (en)
Inventor
魏宁 (Wei Ning)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811644294.6A
Publication of CN109489680A
Application granted
Publication of CN109489680B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3446: Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a reference trajectory line generation method for a spiral lane and a vehicle-mounted device, wherein the method comprises the following steps: acquiring a first image captured by a camera; identifying a plurality of target guide marks in the first image using a deep-learning target recognition network model; and fitting the plurality of target guide marks to obtain a reference trajectory line of the spiral lane. The invention can reduce the cost of reference trajectory line generation.

Description

Reference trajectory line generation method for spiral lane and vehicle-mounted equipment
Technical Field
The invention relates to the technical field of automobiles, and in particular to a reference trajectory line generation method for a spiral lane and a vehicle-mounted device.
Background
With the development of intelligent automobiles, autonomous driving technology has become increasingly mature. In a valet parking scenario, a vehicle usually needs to traverse a spiral ramp from the ground to an underground garage, or from the underground garage back to the ground, and a stable and reliable reference trajectory line is the key problem to be solved for an autonomous vehicle to pass smoothly through this special road section. In the prior art, generating a reference trajectory line usually requires combining a high-precision map with vehicle positioning, which is costly.
Disclosure of Invention
The embodiments of the invention provide a reference trajectory line generation method for a spiral lane and a vehicle-mounted device, aiming to solve the problem of the high cost of existing reference trajectory line generation.
In a first aspect, an embodiment of the present invention provides a reference trajectory line generation method for a spiral lane, including:
acquiring a first image captured by a camera;
identifying a plurality of target guide marks in the first image using a deep-learning target recognition network model;
and fitting the plurality of target guide marks to obtain a reference trajectory line of the spiral lane.
In a second aspect, an embodiment of the present invention further provides an on-board device, including:
a first acquisition module, configured to acquire a first image captured by the camera;
an identification module, configured to identify a plurality of target guide marks in the first image using a deep-learning target recognition network model;
and a fitting module, configured to fit the plurality of target guide marks to obtain a reference trajectory line of the spiral lane.
In a third aspect, an embodiment of the present invention further provides an on-board device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above reference trajectory line generation method for a spiral lane.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above-mentioned reference trajectory line generation method for a spiral lane.
In the embodiments of the invention, the first image captured by the camera is input into a deep-learning target recognition network model to identify target guide marks, and after a plurality of target guide marks have been identified, curve fitting is performed to obtain the reference trajectory line. Compared with the prior-art approach of generating the reference trajectory line by combining a high-precision map with vehicle positioning, the embodiments can reduce the cost of reference trajectory line generation.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a reference trajectory line generation method for a spiral lane according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a first image in a reference trajectory line generation method for a spiral lane according to an embodiment of the present invention;
fig. 3 is a second flowchart of a reference trajectory line generation method for a spiral lane according to an embodiment of the present invention;
fig. 4 is a structural diagram of an in-vehicle apparatus provided in an embodiment of the present invention;
fig. 5 is a structural diagram of an in-vehicle device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a reference trajectory line generation method for a spiral lane according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
Step 101, acquiring a first image captured by a camera;
the reference trajectory generation method for the spiral lane provided by the embodiment is mainly applied to vehicle-mounted equipment and used for generating the reference trajectory in the spiral lane scene, wherein the reference trajectory is used for assisting automatic driving. Specifically, the manner of performing the automatic driving control based on the reference trajectory line will not be further described here.
The camera may be mounted inside the vehicle to capture video images of the area in front of the vehicle. In this embodiment, when the vehicle enters or is about to enter the spiral lane, an image captured by the camera may be obtained in real time for target guide mark recognition, or an image captured by the camera may be obtained once every preset time period for target guide mark recognition; this is not further limited herein.
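For illustration only, the following minimal Python sketch shows interval-based frame acquisition with OpenCV; the camera index and the sampling period are assumptions rather than values specified by this embodiment:

```python
import time

import cv2  # OpenCV; assumed to be available on the vehicle-mounted device

SAMPLE_PERIOD_S = 0.5  # hypothetical "preset time period" between acquisitions


def acquire_first_images(device_index: int = 0):
    """Yield one camera frame every SAMPLE_PERIOD_S seconds."""
    cap = cv2.VideoCapture(device_index)
    try:
        last = 0.0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            now = time.monotonic()
            if now - last >= SAMPLE_PERIOD_S:
                last = now
                yield frame  # the "first image" handed to the recognition model
    finally:
        cap.release()
```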
The first image includes a plurality of guide marks; as shown in fig. 2, a plurality of guide marks are disposed on the wall surfaces of the spiral lane. The plurality of guide marks includes first guide marks 2011 uniformly spaced on the left wall 201 and second guide marks 2021 uniformly spaced on the right wall 202. A target guide mark is a guide mark on either one of the walls; for example, the target guide mark may be a first guide mark 2011 on the left wall 201 or a second guide mark 2021 on the right wall 202. The first image may include the guide marks on only one wall, or it may include the guide marks on both walls; this is not further limited herein.
It should be noted that the specific form of the guide mark can be set according to actual needs; in general, the guide mark is a rectangular frame with a direction indicated inside it. As shown in fig. 2, the directions indicated in the guide marks on the first side wall are all the same, and the directions indicated in the guide marks on the second side wall are all the same.
Step 102, identifying a plurality of target guide marks in the first image using a deep-learning target recognition network model;
the target guide mark is a guide mark on the first side wall or a guide mark on the second wall. As shown in fig. 2, when the vehicle enters the garage from the left lane 203, the target guide mark is a guide mark on the left wall, i.e. a guide mark on the first side wall; when the vehicle leaves the garage from the right lane 204, the target guide mark is a guide mark on the right side wall, i.e., the guide mark on the second side wall.
Step 103, fitting the plurality of target guide marks to obtain a reference trajectory line of the spiral lane.
In this embodiment, after the plurality of target guide marks have been identified, curve fitting may be performed on the plurality of target guide marks to obtain the reference trajectory line.
In the embodiments of the invention, the first image captured by the camera is input into a deep-learning target recognition network model to identify target guide marks, and after a plurality of target guide marks have been identified, curve fitting is performed to obtain the reference trajectory line. Compared with the prior-art approach of generating the reference trajectory line by combining a high-precision map with vehicle positioning, the embodiments can reduce the cost of reference trajectory line generation.
It should be noted that the manner of fitting the target guide marks to obtain the reference trajectory line may be set according to actual needs; for example, in this embodiment, the reference trajectory line may be obtained by curve fitting based on the upper edge lines or the lower edge lines of the target guide marks. Specifically, the step 103 may include:
identifying a target edge line of each target guide mark, wherein the target edge line is an upper edge line or a lower edge line of the target guide mark;
in this embodiment, the method for detecting the target edge of the target mark may be set according to actual needs, for example, a canny edge detection method may be adopted for detection. That is, the identifying the target edge line of each target guiding mark comprises: and carrying out canny edge detection on each target guide mark to obtain a target edge line of each target guide mark.
And fitting the target edge lines of the target guide marks to obtain a reference trajectory line of the spiral lane.
The fitting of the target edge lines of the target guide marks may be performed in different manners according to actual needs. In an optional embodiment, the curve fitting may be performed with a degree-3 polynomial; that is, the fitting of the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane comprises: performing cubic curve fitting on the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
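A minimal Python sketch of this edge-extraction-and-fitting step, assuming the target guide marks have already been detected as pixel bounding boxes; the Canny thresholds and the use of the topmost edge pixel per column are illustrative assumptions:

```python
import cv2
import numpy as np


def upper_edge_points(image, box):
    """Collect (x, y) points on the upper edge line of one guide mark.

    For each column inside the detected box, the topmost Canny edge pixel
    is taken as a point on the mark's upper edge line.
    """
    x1, y1, x2, y2 = box
    roi = cv2.cvtColor(image[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(roi, 50, 150)  # hypothetical hysteresis thresholds
    points = []
    for col in range(edges.shape[1]):
        rows = np.flatnonzero(edges[:, col])
        if rows.size:
            points.append((x1 + col, y1 + rows[0]))
    return points


def fit_reference_line(image, boxes):
    """Fit one cubic y = f(x) through the upper-edge points of all target marks."""
    points = [p for box in boxes for p in upper_edge_points(image, box)]
    xs = np.array([p[0] for p in points], dtype=np.float64)
    ys = np.array([p[1] for p in points], dtype=np.float64)
    coeffs = np.polyfit(xs, ys, deg=3)  # degree-3 (cubic) curve fitting
    return np.poly1d(coeffs)            # reference trajectory line in image coordinates
```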
It should be noted that the training mode of the target recognition network model may be set according to actual needs, for example, in this embodiment, as shown in fig. 3, before the step 101, the method further includes:
Step 104, acquiring second images of a plurality of spiral lanes, wherein the second images include guide marks;
Step 105, labeling the guide marks in the second images, inputting the labeled second images into a deep learning network model, and training to obtain the target recognition network model;
the guiding marks on the first side wall body in the spiral lane are used as first type guiding marks, the guiding marks on the second side wall body are used as second type guiding marks, and the target guiding marks are first type guiding marks or second type guiding marks.
In this embodiment, the second images may be a plurality of images collected by the same camera, or a plurality of images collected by different cameras; the camera may be of any type, for example, a wide-angle camera or a non-wide-angle camera (i.e., an ordinary camera).
A single second image may contain only first-type guide marks, only second-type guide marks, or both; this is not further limited herein. First, all the guide marks in the second images can be labeled manually, and the parts of each second image other than the labeled guide marks are treated as background. When the target recognition network model is trained, the first-type and second-type guide marks are used as the outputs of the network model, and the labeled second images are used as its inputs, so that the target recognition network model is obtained through training.
The guide marks in the second images may be labeled as follows: the user frames each real guide mark with a selection box (bounding box).
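No annotation format is prescribed here. Purely as a sketch, one labeled training record could be stored as follows, where the JSON layout, file name, and class names are all hypothetical:

```python
import json

# One labeled training record for a "second image". The patent only requires
# that real guide marks be framed with selection boxes and split into two types;
# everything outside the boxes is treated as background during training.
record = {
    "image": "spiral_lane_0001.jpg",
    "annotations": [
        {"class": "first_type_guide_mark",  "bbox": [312, 240, 370, 298]},
        {"class": "second_type_guide_mark", "bbox": [902, 255, 958, 310]},
    ],
}
print(json.dumps(record, indent=2))
```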
It should be noted that, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or implemented separately, and the embodiments of the present invention are not limited thereto.
Referring to fig. 4, fig. 4 is a structural diagram of an in-vehicle device provided in an embodiment of the present invention, and as shown in fig. 4, an in-vehicle device 400 includes:
a first obtaining module 401, configured to obtain a first image captured by a camera;
an identification module 402, configured to identify a plurality of target guide marks in the first image using a deep-learning target recognition network model;
a fitting module 403, configured to fit the plurality of target guide marks to obtain a reference trajectory line of the spiral lane.
Optionally, the fitting module 403 includes:
an identification unit, configured to identify the target edge line of each target guide mark, where the target edge line is the upper edge line or the lower edge line of the target guide mark;
and a fitting unit, configured to fit the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
Optionally, the fitting unit is specifically configured to: perform cubic curve fitting on the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
Optionally, the identification unit is specifically configured to: perform Canny edge detection on each target guide mark to obtain the target edge line of each target guide mark.
Optionally, the vehicle-mounted device 400 further includes:
a second acquisition module, configured to acquire second images of a plurality of spiral lanes, where the second images include guide marks;
a training module, configured to label the guide marks in the second images, input the labeled second images into a deep learning network model, and train to obtain the target recognition network model;
where the guide marks on the first side wall of the spiral lane serve as first-type guide marks, the guide marks on the second side wall serve as second-type guide marks, and a target guide mark is a first-type guide mark or a second-type guide mark.
The vehicle-mounted device provided by the embodiment of the present invention can implement each process implemented by the vehicle-mounted device in the method embodiments of fig. 1 to fig. 3, and the details are not repeated here. The device inputs a first image captured by a camera into a deep-learning target recognition network model to identify target guide marks, and after a plurality of target guide marks have been identified, performs curve fitting to obtain the reference trajectory line. Compared with the prior-art approach of generating the reference trajectory line by combining a high-precision map with vehicle positioning, the device can reduce the cost of reference trajectory line generation.
Fig. 5 is a schematic diagram of a hardware structure of an on-vehicle device for implementing various embodiments of the present invention.
The in-vehicle apparatus 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the configuration shown in fig. 5 does not constitute a limitation of the in-vehicle apparatus, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The processor 510 is configured to: acquire a first image captured by a camera; identify a plurality of target guide marks in the first image using a deep-learning target recognition network model; and fit the plurality of target guide marks to obtain a reference trajectory line of the spiral lane.
Optionally, the fitting by the processor 510 according to the plurality of target guide marks to obtain the reference trajectory line of the spiral lane includes:
identifying a target edge line of each target guide mark, wherein the target edge line is an upper edge line or a lower edge line of the target guide mark;
and fitting the target edge lines of the target guide marks to obtain a reference trajectory line of the spiral lane.
Optionally, the processor 510 is specifically configured to:
and performing 3 times of curve fitting on the target edge line of each target guide mark to obtain a reference trajectory line of the spiral lane.
Optionally, the processor 510 is specifically configured to:
and carrying out canny edge detection on each target guide mark to obtain a target edge line of each target guide mark.
Optionally, the processor 510 is specifically configured to:
acquiring second images of a plurality of spiral lanes, wherein the second images include guide marks;
labeling the guide marks in the second images, inputting the labeled second images into a deep learning network model, and training to obtain the target recognition network model;
where the guide marks on the first side wall of the spiral lane serve as first-type guide marks, the guide marks on the second side wall serve as second-type guide marks, and a target guide mark is a first-type guide mark or a second-type guide mark.
In the embodiments of the invention, the first image captured by the camera is input into a deep-learning target recognition network model to identify target guide marks, and after a plurality of target guide marks have been identified, curve fitting is performed to obtain the reference trajectory line. Compared with the prior-art approach of generating the reference trajectory line by combining a high-precision map with vehicle positioning, the embodiments can reduce the cost of reference trajectory line generation.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transceiving or a call; specifically, it receives downlink data from a base station and sends it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The in-vehicle device provides wireless broadband internet access to the user through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the in-vehicle apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501.
The in-vehicle device 500 further includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 5061 and/or the backlight when the in-vehicle apparatus 500 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the attitude of the vehicle-mounted device (for horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-related functions (such as pedometer and tap detection); the sensors 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the in-vehicle apparatus. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 5071 with a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. The touch panel 5071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the in-vehicle device, in some embodiments they may be integrated to implement those functions; this is not limited herein.
The interface unit 508 is an interface for connecting an external device to the in-vehicle apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the in-vehicle apparatus 500 or may be used to transmit data between the in-vehicle apparatus 500 and the external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the device (such as audio data and a phonebook). Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the in-vehicle device, connects various parts of the entire in-vehicle device by various interfaces and lines, and performs various functions of the in-vehicle device and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the in-vehicle device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The in-vehicle device 500 may further include a power supply 511 (such as a battery) for supplying power to each component, and preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the in-vehicle apparatus 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an on-board device, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and capable of running on the processor 510, where the computer program is executed by the processor 510 to implement each process of the above-mentioned embodiment of the reference trajectory generation method for a spiral lane, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned method for generating a reference trajectory line of a spiral lane, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A reference trajectory line generation method for a spiral lane, comprising:
acquiring a first image captured by a camera;
identifying a plurality of target guide marks in the first image using a deep-learning target recognition network model;
fitting the target guide marks to obtain a reference trajectory line of the spiral lane;
wherein, before the acquiring of the first image captured by the camera, the method further comprises:
acquiring second images of a plurality of spiral lanes, wherein the second images include guide marks;
labeling the guide marks in the second images, inputting the labeled second images into a deep learning network model, and training to obtain the target recognition network model;
wherein the guide marks on the first side wall of the spiral lane serve as first-type guide marks, the guide marks on the second side wall serve as second-type guide marks, and a target guide mark is a first-type guide mark or a second-type guide mark.
2. The method of claim 1, wherein the fitting of the target guide marks to obtain the reference trajectory line of the spiral lane comprises:
identifying a target edge line of each target guide mark, wherein the target edge line is an upper edge line or a lower edge line of the target guide mark;
and fitting the target edge lines of the target guide marks to obtain a reference trajectory line of the spiral lane.
3. The method of claim 2, wherein the fitting of the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane comprises:
performing cubic curve fitting on the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
4. The method of claim 2, wherein the identifying of the target edge line of each target guide mark comprises:
performing Canny edge detection on each target guide mark to obtain the target edge line of each target guide mark.
5. An in-vehicle apparatus, characterized by comprising:
a first acquisition module, configured to acquire a first image captured by a camera;
an identification module, configured to identify a plurality of target guide marks in the first image using a deep-learning target recognition network model;
a fitting module, configured to fit the target guide marks to obtain a reference trajectory line of the spiral lane;
wherein the in-vehicle apparatus further comprises:
a second acquisition module, configured to acquire second images of a plurality of spiral lanes, where the second images include guide marks;
a training module, configured to label the guide marks in the second images, input the labeled second images into a deep learning network model, and train to obtain the target recognition network model;
wherein the guide marks on the first side wall of the spiral lane serve as first-type guide marks, the guide marks on the second side wall serve as second-type guide marks, and a target guide mark is a first-type guide mark or a second-type guide mark.
6. The in-vehicle apparatus according to claim 5, wherein the fitting module includes:
an identification unit, configured to identify the target edge line of each target guide mark, where the target edge line is the upper edge line or the lower edge line of the target guide mark;
and a fitting unit, configured to fit the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
7. The vehicle-mounted device according to claim 6, wherein the fitting unit is specifically configured to: perform cubic curve fitting on the target edge lines of the target guide marks to obtain the reference trajectory line of the spiral lane.
8. The vehicle-mounted device according to claim 6, wherein the identification unit is specifically configured to: perform Canny edge detection on each target guide mark to obtain the target edge line of each target guide mark.
9. An in-vehicle apparatus comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the reference trajectory line generation method for a spiral lane as claimed in any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the reference trajectory line generation method for a spiral lane of any one of claims 1 to 4.
CN201811644294.6A 2018-12-29 2018-12-29 Reference trajectory line generation method for spiral lane and vehicle-mounted equipment Active CN109489680B (en)

Priority Applications (1)

Application Number: CN201811644294.6A · Priority Date: 2018-12-29 · Filing Date: 2018-12-29 · Title: Reference trajectory line generation method for spiral lane and vehicle-mounted equipment

Applications Claiming Priority (1)

Application Number: CN201811644294.6A · Priority Date: 2018-12-29 · Filing Date: 2018-12-29 · Title: Reference trajectory line generation method for spiral lane and vehicle-mounted equipment

Publications (2)

Publication Number Publication Date
CN109489680A (en) 2019-03-19
CN109489680B (en) 2020-12-22

Family

Family ID: 65712066

Family Applications (1)

Application Number: CN201811644294.6A · Status: Active · Priority Date: 2018-12-29 · Filing Date: 2018-12-29 · Title: Reference trajectory line generation method for spiral lane and vehicle-mounted equipment

Country Status (1)

Country Link
CN (1) CN109489680B (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7151844B2 (en) * 2001-12-06 2006-12-19 General Motors Corporation Image sensor method and apparatus having hardware implemented edge detection processing
CN102298693B (en) * 2011-05-18 2013-04-24 浙江大学 Expressway bend detection method based on computer vision
CN108090456B (en) * 2017-12-27 2020-06-19 北京初速度科技有限公司 Training method for recognizing lane line model, and lane line recognition method and device
CN108573242A (en) * 2018-04-26 2018-09-25 南京行车宝智能科技有限公司 A kind of method for detecting lane lines and device
CN108764075A (en) * 2018-05-15 2018-11-06 北京主线科技有限公司 The method of container truck location under gantry crane
CN108830165A (en) * 2018-05-22 2018-11-16 南通职业大学 A kind of method for detecting lane lines considering front truck interference

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103968837A (en) * 2014-04-25 2014-08-06 惠州华阳通用电子有限公司 Method and device for correcting calibration factor of gyroscope in inertial navigation system
CN107063236A (en) * 2017-05-03 2017-08-18 蒋劭恒 Positioning and navigation system and implementation method in a kind of building

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic geometry measurement for curved ramps using inertial measurement unit and 3D LiDAR system; Wenting Luo, Lin Li; Automation in Construction; 2018-10-31; full text *
Tracking intelligent vehicle based on machine vision and attitude detection (基于机器视觉和姿态检测的循迹智能车); Wang Tianhe et al.; Ordnance Industry Automation (兵工自动化); 2012-08-15; full text *

Also Published As

Publication number Publication date
CN109489680A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109977845B (en) Driving region detection method and vehicle-mounted terminal
CN107977144B (en) Screen capture processing method and mobile terminal
CN109381165B (en) Skin detection method and mobile terminal
CN108153422B (en) Display object control method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN110519699B (en) Navigation method and electronic equipment
CN110795949A (en) Card swiping method and device, electronic equipment and medium
CN110110571B (en) Code scanning method and mobile terminal
CN109784234B (en) Right-angled bend identification method based on forward fisheye lens and vehicle-mounted equipment
CN111126995A (en) Payment method and electronic equipment
CN111107219B (en) Control method and electronic equipment
CN110571952B (en) Wireless charging method and related equipment
CN112927539B (en) Mapping method and device for automatic parking
CN108196663B (en) Face recognition method and mobile terminal
CN109711477B (en) Automatic driving model training method and device
CN111103607B (en) Positioning prompting method and electronic equipment
CN110470293B (en) Navigation method and mobile terminal
CN110474436B (en) Wireless charging method and related equipment
CN109474889B (en) Information transmission method, mobile terminal and server
CN107730030B (en) Path planning method and mobile terminal
CN109246305B (en) Navigation processing method of navigation equipment, navigation equipment and mobile terminal
CN109489680B (en) Reference trajectory line generation method for spiral lane and vehicle-mounted equipment
CN110046569B (en) Unmanned driving data processing method and device and electronic equipment
CN110647892A (en) Violation alarm prompting method, terminal and computer readable storage medium
CN110576765A (en) Wireless charging method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant