CN110796084A - Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium

Info

Publication number
CN110796084A
CN110796084A
Authority
CN
China
Prior art keywords
image, lane line, sub-image, lane line image, position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911039994.7A
Other languages
Chinese (zh)
Inventor
丁磊
赵磊
赵岩峰
柴丽颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd filed Critical Human Horizons Shanghai Autopilot Technology Co Ltd
Priority to CN201911039994.7A priority Critical patent/CN110796084A/en
Publication of CN110796084A publication Critical patent/CN110796084A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a lane line identification method, a lane line identification device, lane line identification equipment, and a computer-readable storage medium. The method includes: acquiring a lane line image; acquiring a first sub-image and a second sub-image from the lane line image, wherein the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image; and inputting the first sub-image and the second sub-image into an image recognition model to recognize the lane line in the lane line image. Embodiments of the application enable the image recognition model to recognize the complete lane line and can improve the accuracy of recognizing distant lane lines.

Description

Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium
Technical Field
The present disclosure relates to the field of image recognition, and more particularly, to a lane line recognition method, apparatus, device, and computer-readable storage medium.
Background
In the field of intelligent driving, lane line information is important environmental perception information. After a lane line image is captured, the lane line in the image may be recognized using an image recognition model, for example a deep-learning neural network model, thereby acquiring lane line information. The image recognition model is affected by the image resolution, and fine details in the image are often not recognized accurately, so the model's recognition accuracy for distant lane lines is low.
Disclosure of Invention
The embodiments of the application provide a lane line identification method, a lane line identification device, lane line identification equipment, and a computer-readable storage medium to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a lane line identification method, including:
acquiring a lane line image;
acquiring a first sub-image and a second sub-image from the lane line image, wherein the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image;
and inputting the first sub-image and the second sub-image into an image recognition model to recognize the lane line in the lane line image.
In one embodiment, acquiring the first sub-image and the second sub-image from the lane line image includes:
acquiring a corresponding first sub-image from a current frame lane line image;
determining a lane line disappearance position in a recognized lane line image according to the lane line recognized in that image;
determining position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image;
and acquiring the second sub-image corresponding to the current frame lane line image from the current frame lane line image according to the position information of the second sub-image and predetermined first size information.
In one embodiment, determining the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image includes:
taking the lane line disappearance position in the recognized lane line image as the center position of the second sub-image corresponding to the current frame lane line image.
In one embodiment, the lane line image is acquired by a camera on the vehicle;
determining the position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image includes:
acquiring the current steering angle of the steering wheel of the vehicle;
and determining the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image and the current steering angle of the steering wheel of the vehicle.
In one embodiment, acquiring a corresponding first sub-image from a current frame lane line image includes:
and acquiring the corresponding first sub-image from the current frame lane line image according to predetermined second size information and predetermined position information.
In one embodiment, the position information of the second sub-image corresponding to the first frame lane line image is determined based on the position information of the horizon in the first frame lane line image.
In one embodiment, inputting the first sub-image and the second sub-image into an image recognition model comprises:
adjusting the resolution of the first sub-image and/or the second sub-image so that the first sub-image and the second sub-image are equal in length or width;
stitching the first sub-image and the second sub-image along the equal side to form an image to be recognized;
and inputting the image to be recognized into the image recognition model.
In a second aspect, an embodiment of the present application provides a lane line identification apparatus, including:
the first acquisition module is used for acquiring a lane line image;
the second acquisition module is used for acquiring a first sub-image and a second sub-image from the lane line image, wherein the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image;
and the recognition module is used for inputting the first sub-image and the second sub-image into the image recognition model and recognizing the lane line in the lane line image.
In one embodiment, the second obtaining module includes:
the first acquisition unit is used for acquiring a corresponding first sub-image from the current frame lane line image;
a first determining unit configured to determine a lane line disappearance position in a recognized lane line image according to the lane line recognized in that image;
a second determining unit configured to determine position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image;
and a second acquisition unit configured to acquire the second sub-image corresponding to the current frame lane line image from the current frame lane line image according to the position information of the second sub-image and predetermined first size information.
In a third aspect, an embodiment of the present application provides a lane line identification device, including a memory and a processor. The memory and the processor communicate with each other via an internal connection path; the memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory so as to perform the method of any one of the above aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program; when the computer program runs on a computer, the method in any one of the above aspects is executed.
The advantages or beneficial effects in the above technical solution at least include:
the input information of the image recognition model comprises a first sub-image acquired from the lane line image, and the first sub-image corresponds to a larger live-action interesting area, so that the image recognition model can recognize a complete lane line. And the input information of the image recognition model also comprises a second sub-image acquired from the lane line image, and the second sub-image corresponds to a smaller live-action interesting area, so that the resolution can be better preserved when the image recognition model is input. Through above-mentioned technical scheme, can improve the degree of accuracy of discerning lane line far away.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a flowchart of a lane line identification method according to an embodiment of the present application;
FIG. 2 is a diagram of a first sub-image and a second sub-image according to an embodiment of the present application;
fig. 3 is a flowchart of a lane line identification method according to an embodiment of the present application;
fig. 4 is a flowchart of a lane line identification method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a lane line identification apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a lane line identification apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of a lane line recognition apparatus according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Recognizing lane lines using an image recognition model (e.g., a deep-learning neural network model) may include: preprocessing the captured lane line image to obtain an image that meets the input requirements of the model; inputting the preprocessed image into the image recognition model to recognize the lane line and obtain its pixel coordinates; and obtaining the curve equation of the lane line from those pixel coordinates. The model input requirements may include a required image resolution, in which case the resolution must be adjusted before the lane line image is input into the image recognition model; after the adjustment, distant lane lines may become unclear and therefore cannot be recognized accurately.
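For the curve-fitting step alone, a minimal Python sketch, assuming a second-order polynomial x = a*y^2 + b*y + c as the lane curve model and made-up pixel coordinates (the patent fixes neither the model order nor any coordinates):
```python
import numpy as np

# Hypothetical pixel coordinates (x, y) of one recognized lane line.
lane_pts = np.array([[612, 1040], [655, 900], [690, 760],
                     [718, 620], [739, 480]], dtype=float)

# Fit x as a second-order polynomial in y, a common lane-curve parameterization.
a, b, c = np.polyfit(lane_pts[:, 1], lane_pts[:, 0], deg=2)
# The curve equation of the lane line: x = a*y**2 + b*y + c
```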
Fig. 1 shows a flowchart of a lane line identification method according to an embodiment of the present application. As shown in fig. 1, the lane line identification method may include:
and step S101, acquiring a lane line image.
The lane line image may be captured by a camera, for example a camera on a vehicle. In the field of intelligent driving, a camera may be mounted at the front of a vehicle to capture images of the road ahead, thereby obtaining lane line images and, from them, lane line information that can be used to guide driving operations.
Step S102, acquiring a first sub-image and a second sub-image from the lane line image, where the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image.
Referring to fig. 2, fig. 2 is a schematic diagram of a first sub-image 202 and a second sub-image 203 acquired from a lane line image 201. The first sub-image 202 and the second sub-image 203 correspond to real-scene regions of interest (ROIs) of different sizes. For example, as shown in fig. 2, the first sub-image 202 corresponds to a large real-scene range, and its ROI may include the entire lane line; the second sub-image 203 corresponds to a small real-scene range, and its ROI may include a partial lane line, for example the lane line disappearance position.
There are various ways to acquire the first sub-image and the second sub-image from the lane line image.
As one example, position information and size information may be preset, and the first sub-image and the second sub-image acquired by cropping the lane line image accordingly. For example, the center positions of the first sub-image and the second sub-image are set, and sub-images corresponding to different real-scene regions of interest are then cropped from the lane line image using different lengths and widths. By setting the size of the first sub-image to be larger than that of the second sub-image, the real-scene ROI corresponding to the first sub-image is made larger than the real-scene ROI corresponding to the second sub-image.
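A minimal sketch of this preset crop step in Python with NumPy; every center point and size below is a hypothetical preset, since the patent specifies no concrete values:
```python
import numpy as np

def crop_sub_image(image, center, size):
    """Crop a (width, height) window around a center point, clamping the
    window so it stays inside the image bounds; also return its origin."""
    h, w = image.shape[:2]
    cw, ch = size
    x0 = int(np.clip(center[0] - cw // 2, 0, w - cw))
    y0 = int(np.clip(center[1] - ch // 2, 0, h - ch))
    return image[y0:y0 + ch, x0:x0 + cw], (x0, y0)

# Stand-in 1920x1080 lane line image and hypothetical presets: a wide crop
# covering the whole lane, and a small crop around the lane line
# disappearance position.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
first_sub, first_origin = crop_sub_image(frame, (960, 700), (1920, 760))
second_sub, second_origin = crop_sub_image(frame, (960, 430), (480, 256))
```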
As another example, an algorithm may be provided that computes the position information and/or size information of the first sub-image and the second sub-image. For example, during intelligent driving, consecutive frames of lane line images may be acquired; the center positions of the first sub-image and the second sub-image in the current frame lane line image may then be determined from the lane line position information in the previous frame lane line image, and their sizes determined from preset size information.
And step S103, inputting the first sub-image and the second sub-image into an image recognition model, and recognizing the lane line in the lane line image.
The first sub-image and the second sub-image may be input into the image recognition model separately, or stitched together and input as one image. Illustratively, step S103 includes: adjusting the resolution of the first sub-image and/or the second sub-image so that the two sub-images are equal in length or width; stitching the first sub-image and the second sub-image along the equal side to form an image to be recognized; and inputting the image to be recognized into the image recognition model. Because the first sub-image and the second sub-image are equal in length or width, the two images can be stitched together without changing their aspect ratios, and the image recognition model receives a single input image in which the first sub-image and the second sub-image are associated.
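A minimal sketch of the resize-and-stitch step, assuming the widths are equalized and the two crops are stacked vertically (the patent leaves the stitching side open); OpenCV's cv2.resize performs the resolution adjustment:
```python
import cv2
import numpy as np

def stitch_for_model(first_sub, second_sub):
    """Resize the second sub-image so its width equals the first sub-image's,
    preserving its aspect ratio, then stack the two vertically into a single
    image to be recognized. Returns the stitched image and the scale applied."""
    target_w = first_sub.shape[1]
    scale = target_w / second_sub.shape[1]
    new_h = int(round(second_sub.shape[0] * scale))
    resized = cv2.resize(second_sub, (target_w, new_h))
    return np.vstack([first_sub, resized]), scale
```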
The image recognition model can recognize the lane lines in the first sub-image and/or the second sub-image, and the lane line information in the lane line image is then determined according to the proportional relationship between the first sub-image and the lane line image and the proportional relationship between the second sub-image and the lane line image.
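One plausible form of that proportional relationship, as a sketch: points detected inside a resized crop are divided by the resize scale, then offset by the crop origin. The origin/scale bookkeeping here is an assumption for illustration, not taken from the patent:
```python
import numpy as np

def to_full_image_coords(points, crop_origin, scale=1.0):
    """Map (x, y) pixel coordinates detected inside a (possibly resized)
    sub-image back into the coordinate frame of the original lane line image."""
    pts = np.asarray(points, dtype=float)
    return pts / scale + np.asarray(crop_origin, dtype=float)

# e.g. lane points found in the resized second sub-image:
# full_pts = to_full_image_coords(detected_pts, second_origin, scale)
```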
In this way, the input information of the image recognition model includes the first sub-image acquired from the lane line image; since the first sub-image corresponds to a larger real-scene region of interest, the image recognition model can recognize the complete lane line. The input information also includes the second sub-image acquired from the lane line image; since the second sub-image corresponds to a smaller real-scene region of interest, its resolution is better preserved when it is input into the model. Through the above technical solution, the accuracy of recognizing distant lane lines can be improved.
As an exemplary embodiment, as shown in fig. 3, step S102 may include:
s301, acquiring a corresponding first sub-image from a current frame lane line image;
step S302, determining a lane line disappearing position in the recognized lane line image according to the recognized lane line in the recognized lane line image;
step S303, determining the position information of a second sub-image corresponding to the current lane line image according to the lane line disappearance position in the identified lane line image;
step S304, according to the position information of the second sub-image and the preset first size information, the second sub-image corresponding to the current frame lane line image is obtained from the current frame lane line image.
In this exemplary embodiment, the position information of the second sub-image is determined according to the lane line disappearance position in the recognized lane line image, and the second sub-image corresponding to the current frame lane line image is then acquired using that position information together with the predetermined first size information. In this way, for each frame in a sequence of consecutive lane line images, the position of the corresponding second sub-image stays tied to the lane line disappearance position, so the real-scene ROI corresponding to the second sub-image can include the distant lane line, which improves the recognition accuracy for distant lane lines.
Illustratively, the recognized lane line image may be any frame prior to the current frame. For example, it may be the most recent lane line image in which a lane line was recognized. Depending on the processing speed of the image recognition model and the image acquisition rate, that image may be the previous frame or a frame several frames earlier.
An exemplary embodiment may further include a step of determining the position information of the second sub-image corresponding to the first frame lane line image. The first frame has no preceding lane line image, so the position information of its corresponding second sub-image may be determined as follows:
as an example one, predetermined initial position information may be employed as the position information of the second sub-image corresponding to the first frame lane line image. The initial position information may be set in advance by a person skilled in the art based on experience as the position information of the second sub-image corresponding to the first frame lane line image.
As example two, the position information of the second sub-image corresponding to the first frame lane line image is determined according to the position information of the horizon in the first frame lane line image. The position of the horizon can be obtained through camera calibration. In some embodiments, a coordinate system is set in the image before the lane line image is processed, and the zero point of its vertical axis may be determined according to the horizon in the image; the position of the horizon is therefore known once the coordinate system is set. In this example, a point on the horizon, for example the center point of the horizon, may be selected as the center position of the second sub-image.
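A minimal sketch of example two, assuming the horizon row comes from camera calibration and the horizon's center point is chosen as described above (the function name and values are illustrative):
```python
def initial_second_roi_center(image_width, horizon_y):
    """First frame: no lane line has been recognized yet, so center the
    second sub-image on the midpoint of the calibrated horizon line."""
    return (image_width / 2.0, float(horizon_y))

# e.g. center = initial_second_roi_center(1920, horizon_y=430)
```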
The step S303 may be implemented in various ways.
For example, the lane line disappearance position in the recognized lane line image may be used as the center position of the second sub-image corresponding to the current frame lane line image.
As another example, the position information of the second sub-image is determined by combining the lane line disappearance position in the recognized lane line image with the vehicle steering angle. The lane line image may be acquired by a camera on the vehicle. As shown in fig. 4, determining the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image (step S303) may include:
s401, acquiring the current steering angle of a steering wheel of a vehicle;
and S402, determining the position information of the second sub-image corresponding to the current lane line image according to the lane line disappearing position in the recognized lane line image and the current steering wheel steering angle of the vehicle.
For example, the position information of the second sub-image here includes a reference point used to determine the position of the second sub-image, such as its center position or its top-left vertex. In step S402, the lane line disappearance position in the recognized lane line image may be taken as a reference position and the vehicle traveling direction indicated by the current steering angle as an offset direction, and the center position of the second sub-image may be set to the point obtained by moving the reference position a certain distance in that offset direction. Alternatively, a function may be established that takes the lane line disappearance position of the recognized lane line image and the vehicle steering wheel angle as input and outputs the center position of the second sub-image, and step S402 may compute the position information of the second sub-image through that function. Through this technical solution, the driving direction of the vehicle is taken into account, so that the real-scene ROI corresponding to the second sub-image always includes the distant lane line, which improves the recognition accuracy for distant lane lines.
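A minimal sketch of such a function, assuming a purely horizontal shift proportional to the steering angle; the pixels-per-degree gain and the sign convention are invented tuning choices, not values from the patent:
```python
def second_roi_center(vanish_pos, steering_angle_deg, gain_px_per_deg=4.0):
    """Shift the lane line disappearance position of the recognized frame
    horizontally toward the direction the steering wheel currently points.

    vanish_pos         -- (x, y) disappearance position in image pixels
    steering_angle_deg -- assumed positive for a right turn, negative for left
    gain_px_per_deg    -- made-up constant mapping degrees to pixel offset
    """
    x, y = vanish_pos
    return (x + gain_px_per_deg * steering_angle_deg, y)

# e.g. center = second_roi_center((980.0, 430.0), steering_angle_deg=-6.5)
```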
As an exemplary embodiment, acquiring the corresponding first sub-image from the current frame lane line image (step S301) may include: acquiring the corresponding first sub-image from the current frame lane line image according to predetermined second size information and predetermined position information. The real-scene ROI corresponding to the first sub-image can include the whole lane line and occupies a large portion of the lane line image, and the relative position of the first sub-image within the lane line image changes little while consecutive frames are processed, so presetting the size information and position information of the first sub-image improves processing speed. It should be understood that the position and size of the first sub-image may also be adjusted adaptively according to the recognition result of a recognized image.
It should be noted that although the lane line identification method has been described above through various embodiments, those skilled in the art will understand that the present application is not limited thereto. In practice, the lane line identification method may be configured flexibly according to personal preference and/or the actual application scenario.
Fig. 5 is a block diagram illustrating a configuration of a lane line recognition apparatus according to an embodiment of the present invention. As shown in fig. 5, the lane line recognition device 500 may include:
a first obtaining module 501, configured to obtain a lane line image;
a second obtaining module 502, configured to obtain a first sub-image and a second sub-image from the lane line image, where the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image;
the recognition module 503 is configured to input the first sub-image and the second sub-image into the image recognition model, and recognize the lane line in the lane line image.
As an exemplary embodiment, as shown in fig. 6, the second obtaining module 502 includes:
a first obtaining unit 601, configured to obtain a corresponding first sub-image from a current frame lane line image;
a first determining unit 602, configured to determine a lane line disappearance position in a recognized lane line image according to the lane line recognized in that image;
a second determining unit 603, configured to determine position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image;
a second obtaining unit 604, configured to obtain the second sub-image corresponding to the current frame lane line image from the current frame lane line image according to the position information of the second sub-image and predetermined first size information.
As an exemplary embodiment, the second determining unit 603 includes:
a center determining subunit, configured to take the lane line disappearance position in the recognized lane line image as the center position of the second sub-image corresponding to the current frame lane line image.
As an exemplary embodiment, the lane line image is acquired by a camera on the vehicle; the second determination unit 603 includes:
a steering angle acquisition subunit, configured to acquire the current steering angle of the steering wheel of the vehicle;
and a position determining subunit, configured to determine the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image and the current steering angle of the steering wheel of the vehicle.
As an exemplary embodiment, the first acquisition unit 601 includes:
an acquisition subunit, configured to acquire the corresponding first sub-image from the current frame lane line image according to predetermined second size information and predetermined position information.
As an exemplary embodiment, the position information of the second sub-image corresponding to the first frame lane line image is determined from the position information of the horizon in the first frame lane line image.
As an exemplary embodiment, the identification module 503 includes:
an adjusting unit, configured to adjust the resolution of the first sub-image and/or the second sub-image so that the first sub-image and the second sub-image are equal in length or width;
a stitching unit, configured to stitch the first sub-image and the second sub-image along the equal side into an image to be recognized;
and an input unit, configured to input the image to be recognized into the image recognition model.
The functions of each module in each apparatus in the embodiments of the present invention may refer to the corresponding description in the above method, and are not described herein again.
Fig. 7 is a block diagram illustrating a configuration of a lane line recognition apparatus according to an embodiment of the present invention. As shown in fig. 7, the lane line recognition apparatus includes: a memory 910 and a processor 920, the memory 910 having stored therein computer programs operable on the processor 920. The processor 920 implements the lane line identification method in the above-described embodiment when executing the computer program. The number of the memory 910 and the processor 920 may be one or more.
The lane line identification apparatus further includes:
a communication interface 930, configured to communicate with external devices for interactive data transmission.
If the memory 910, the processor 920, and the communication interface 930 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean there is only one bus or one type of bus.
Optionally, in an implementation, if the memory 910, the processor 920 and the communication interface 930 are integrated on a chip, the memory 910, the processor 920 and the communication interface 930 may complete communication with each other through an internal interface.
Embodiments of the present invention provide a computer-readable storage medium, which stores a computer program, and when the program is executed by a processor, the computer program implements the method provided in the embodiments of the present application.
The embodiment of the present application further provides a chip that includes a processor configured to call and execute instructions stored in a memory, so that a communication device on which the chip is installed executes the method provided in the embodiments of the present application.
An embodiment of the present application further provides a chip, including: an input interface, an output interface, a processor, and a memory, connected through an internal connection path. The processor is configured to execute code in the memory, and when the code is executed, the processor performs the method provided in the embodiments of the present application.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor may also be one supporting the Advanced RISC Machine (ARM) architecture.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may include a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may include Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the above method embodiments may be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
While the present application has been described with reference to preferred embodiments, those skilled in the art may make various changes and substitutions without departing from the spirit and scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A lane line identification method is characterized by comprising the following steps:
acquiring a lane line image;
acquiring a first sub-image and a second sub-image from the lane line image, wherein the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image;
and inputting the first sub-image and the second sub-image into an image recognition model to recognize the lane line in the lane line image.
2. The method of claim 1, wherein obtaining a first sub-image and a second sub-image from the lane line image comprises:
acquiring a corresponding first sub-image from a current frame lane line image;
determining a lane line disappearance position in a recognized lane line image according to the lane line recognized in that image;
determining position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image;
and acquiring the second sub-image corresponding to the current frame lane line image from the current frame lane line image according to the position information of the second sub-image and predetermined first size information.
3. The method of claim 2, wherein determining the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image comprises:
taking the lane line disappearance position in the recognized lane line image as the center position of the second sub-image corresponding to the current frame lane line image.
4. The method of claim 2, wherein the lane line image is acquired by a camera on a vehicle;
determining the position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image comprises:
acquiring the current steering angle of a steering wheel of the vehicle;
and determining the position information of the second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image and the current steering angle of the steering wheel of the vehicle.
5. The method of claim 2, wherein obtaining a corresponding first sub-image from a current frame lane line image comprises:
and acquiring the corresponding first sub-image from the current frame lane line image according to predetermined second size information and predetermined position information.
6. The method according to claim 2, wherein the position information of the second sub-image corresponding to the first frame lane line image is determined based on the position information of the horizon in the first frame lane line image.
7. The method of any of claims 1 to 6, wherein inputting the first sub-image and the second sub-image into an image recognition model comprises:
adjusting the resolution of the first sub-image and/or the second sub-image so that the first sub-image and the second sub-image are equal in length or width;
stitching the first sub-image and the second sub-image along the equal side to form an image to be recognized;
and inputting the image to be recognized into the image recognition model.
8. A lane line identification apparatus, comprising:
the first acquisition module is used for acquiring a lane line image;
the second acquisition module is used for acquiring a first sub-image and a second sub-image from the lane line image, wherein the real-scene region of interest corresponding to the first sub-image is larger than the real-scene region of interest corresponding to the second sub-image;
and the recognition module is used for inputting the first sub-image and the second sub-image into an image recognition model and recognizing the lane line in the lane line image.
9. The apparatus of claim 8, wherein the second obtaining module comprises:
the first acquisition unit is used for acquiring a corresponding first sub-image from the current frame lane line image;
a first determining unit configured to determine a lane line disappearance position in a recognized lane line image according to the lane line recognized in that image;
a second determining unit configured to determine position information of a second sub-image corresponding to the current frame lane line image according to the lane line disappearance position in the recognized lane line image;
and a second acquisition unit configured to acquire the second sub-image corresponding to the current frame lane line image from the current frame lane line image according to the position information of the second sub-image and predetermined first size information.
10. A lane line identification device, comprising a processor and a memory, the memory having stored therein instructions that are loaded and executed by the processor to implement the method of any one of claims 1 to 7.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201911039994.7A 2019-10-29 2019-10-29 Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium Pending CN110796084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911039994.7A CN110796084A (en) 2019-10-29 2019-10-29 Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911039994.7A CN110796084A (en) 2019-10-29 2019-10-29 Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN110796084A 2020-02-14

Family

ID=69442029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911039994.7A Pending CN110796084A (en) 2019-10-29 2019-10-29 Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110796084A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389026A (en) * 2017-08-09 2019-02-26 三星电子株式会社 Lane detection method and equipment
CN107657832A (en) * 2017-11-15 2018-02-02 吉林大学 A kind of parking stall bootstrap technique and system
CN110147698A (en) * 2018-02-13 2019-08-20 Kpit技术有限责任公司 System and method for lane detection
CN110164163A (en) * 2018-02-13 2019-08-23 福特全球技术公司 The method and apparatus determined convenient for environment visibility
CN108932472A (en) * 2018-05-23 2018-12-04 中国汽车技术研究中心有限公司 A kind of automatic Pilot running region method of discrimination based on lane detection
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest
CN110088766A (en) * 2019-01-14 2019-08-02 京东方科技集团股份有限公司 Lane detection method, Lane detection device and non-volatile memory medium
CN109886175A (en) * 2019-02-13 2019-06-14 合肥思艾汽车科技有限公司 A kind of method for detecting lane lines that straight line is combined with circular arc

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505747A (en) * 2021-07-27 2021-10-15 浙江大华技术股份有限公司 Lane line recognition method and apparatus, storage medium, and electronic device
CN113807236A (en) * 2021-09-15 2021-12-17 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and program product for lane line detection
CN113807236B (en) * 2021-09-15 2024-05-17 北京百度网讯科技有限公司 Method, device, equipment, storage medium and program product for lane line detection

Similar Documents

Publication Title
CN107409166B (en) Automatic generation of panning shots
CN107945105B (en) Background blurring processing method, device and equipment
WO2016171050A1 (en) Image processing device
US11126875B2 (en) Method and device of multi-focal sensing of an obstacle and non-volatile computer-readable storage medium
CN110796201A (en) Method for correcting label frame, electronic equipment and storage medium
CN110650288B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN110796084A (en) Lane line recognition method, lane line recognition device, lane line recognition equipment and computer-readable storage medium
KR20220136196A (en) Image processing device, image processing method, moving device, and storage medium
CN112184827B (en) Method and device for calibrating multiple cameras
CN113610884B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110832851B (en) Image processing apparatus, image conversion method, and program
CN113240582B (en) Image stitching method and device
CN112634628B (en) Vehicle speed determination method, terminal and storage medium
CN112634298B (en) Image processing method and device, storage medium and terminal
US11107197B2 (en) Apparatus for processing image blurring and method thereof
US11227166B2 (en) Method and device for evaluating images, operating assistance method, and operating device
CN113850881A (en) Image generation method, device, equipment and readable storage medium
JP6266340B2 (en) Lane identification device and lane identification method
KR20180068022A (en) Apparatus for automatic calibration of stereo camera image, system having the same and method thereof
CN115131273A (en) Information processing method, ranging method and device
CN107977644B (en) Image data processing method and device based on image acquisition equipment and computing equipment
CN113395434A (en) Preview image blurring method, storage medium and terminal equipment
CN113724129A (en) Image blurring method, storage medium and terminal device
KR20190092155A (en) Method for a video stabilization for real-time optical character recognition (OCR)
US11830234B2 (en) Method and apparatus of processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200214