CN114255597A - Emergency lane occupation behavior detection method and device, electronic equipment and storage medium


Info

Publication number
CN114255597A
Authority
CN
China
Prior art keywords
lane
endpoint
emergency lane
coordinate
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011007778.7A
Other languages
Chinese (zh)
Other versions
CN114255597B (en)
Inventor
Hao Shangrong (郝尚荣)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN202011007778.7A priority Critical patent/CN114255597B/en
Publication of CN114255597A publication Critical patent/CN114255597A/en
Application granted granted Critical
Publication of CN114255597B publication Critical patent/CN114255597B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/017: Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The application provides an emergency lane occupation behavior detection method and device, electronic equipment and a computer-readable storage medium. The emergency lane occupation behavior detection method comprises the following steps: acquiring a state image of a target road; fitting a lane line of the emergency lane according to the state image; performing prediction processing according to the state image to obtain a target line segment where a target vehicle contacts the road surface of the target road; acquiring a first coordinate and a second coordinate of, respectively, a first end point and a second end point of the target line segment in a preset reference coordinate system, together with curve data of the lane line in the reference coordinate system; and determining whether the target vehicle occupies the emergency lane according to the first coordinate, the second coordinate and the curve data. The method and device can automatically detect emergency lane occupation behavior, provide a degree of supervision over such behavior, and improve the recognition rate of emergency lane occupation behavior.

Description

Emergency lane occupation behavior detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of intelligent traffic, in particular to an emergency lane occupation behavior detection method and device, electronic equipment and a computer readable storage medium.
Background
The emergency lane is mainly constructed along the two sides of urban ring roads, express roads and expressways, and is mainly intended to allow rescue vehicles and rescue personnel to pass in emergencies (traffic accidents, transporting the suddenly sick or injured, disaster relief, and the like); it is therefore called a "life passage". Illegally occupying the emergency lane blocks this passage, indirectly threatens the life safety of others, and causes serious harm.
The prior art mainly detects emergency lane occupation behavior as follows: a camera is installed on a guardrail upright post of the emergency lane, and the camera detects vehicles entering the emergency lane according to a preset emergency lane area.
However, in practical application, the inventor has found that the existing detection method for emergency lane occupation has a low recognition rate, because the camera mounted on the upright post is easy for drivers to avoid.
Disclosure of Invention
The application provides an emergency lane occupation behavior detection method and device, electronic equipment and a computer-readable storage medium, and aims to solve the problem that the existing emergency lane occupation behavior detection method has a low recognition rate.
In a first aspect, the present application provides a method for detecting emergency lane occupancy behavior, where the method includes:
acquiring a state image of a target road, wherein the target road is a road provided with an emergency lane;
fitting according to the state image to obtain a lane line of the emergency lane; performing prediction processing according to the state image to obtain a target line segment of a target vehicle in contact with the road surface of the target road, wherein the target vehicle is a vehicle on the target road;
acquiring a first coordinate and a second coordinate of a first end point and a second end point respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system, wherein the first end point and the second end point are two end points of the target line segment;
and determining whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate and the curve data.
In a second aspect, the application provides an emergency lane occupation behavior detection device, the emergency lane occupation behavior detection device includes:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a state image of a target road, and the target road is a road provided with an emergency lane;
the processing unit is used for performing fitting processing according to the state image to obtain a lane line of the emergency lane; performing prediction processing according to the state image to obtain a target line segment of a target vehicle in contact with the road surface of the target road, wherein the target vehicle is a vehicle on the target road;
the second acquisition unit is used for acquiring a first coordinate and a second coordinate of a first end point and a second end point respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system, wherein the first end point and the second end point are two end points of the target line segment;
and the judging unit is used for determining whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate and the curve data.
In a third aspect, the present application further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores a computer program, and the processor executes any one of the steps in the emergency lane occupancy behavior detection method provided in the present application when calling the computer program in the memory.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, the computer program being loaded by a processor to execute the steps in the emergency lane occupancy behavior detection method.
The lane line of the emergency lane is fitted from the state image (of the target road provided with an emergency lane); prediction processing is performed according to the state image to obtain the target line segment where the target vehicle contacts the road surface of the target road; and the relationship between the coordinates of the two end points of the target line segment (i.e., the first coordinate of the first end point and the second coordinate of the second end point) and the curve data of the lane line is then used to determine whether the target vehicle occupies the emergency lane. On the one hand, the position of the target line segment reflects the position of the target vehicle to a certain extent, and whether both end points of the target line segment lie in the region of the emergency lane can be determined from the relationship between their coordinates and the curve data of the lane line, so that whether the target vehicle occupies the emergency lane can be determined. On the other hand, because detection is based on a state image containing the target road, a driver who maliciously occupies the emergency lane finds it difficult to evade detection, so the recognition rate of emergency lane occupation behavior is improved to a certain extent. Emergency lane occupation behavior can thus be detected automatically, providing a degree of supervision over such behavior.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a scene schematic diagram of an emergency lane occupancy behavior detection system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an emergency lane occupancy behavior detection method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of an image containing a target road provided in an embodiment of the present application;
FIG. 4 is a schematic view of a target road scenario provided in an embodiment of the present application;
FIG. 5 is a schematic view of a scene of a reference coordinate system corresponding to each pixel point in a state image provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a scenario of a target line segment provided in an embodiment of the present application;
FIG. 7 is a flowchart illustrating an embodiment of step S40 provided in embodiments of the present application;
fig. 8 is a schematic structural diagram of an embodiment of the emergency lane occupancy behavior detection apparatus provided in the embodiment of the present application;
fig. 9 is a schematic structural diagram of an embodiment of an electronic device provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the embodiments of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known processes have not been described in detail so as not to obscure the description of the embodiments of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments herein.
The embodiment of the application provides an emergency lane occupation behavior detection method and device, electronic equipment and a computer readable storage medium. The emergency lane occupation behavior detection device can be integrated in electronic equipment, and the electronic equipment can be a server or a terminal and the like.
First, before describing the embodiments of the present application, the related contents of the embodiments of the present application with respect to the application context will be described.
In real-life traffic, occupying the emergency lane of a high-grade road is a very dangerous traffic violation and increases the risk of traffic accidents.
For example, on an expressway the emergency lane is a public emergency resource, yet a few drivers occupy it when there is no need; at best this disrupts traffic, and at worst it costs lives. Moreover, traffic police cannot track all sections of an expressway in real time and over their entire length.
Based on the above defects in the prior art, the embodiment of the application provides an emergency lane occupation behavior detection method, which overcomes the defects in the prior art to at least a certain extent.
The execution subject of the emergency lane occupancy behavior detection method of the embodiment of the present application may be the emergency lane occupancy behavior detection device provided in the embodiment of the present application, or a different type of electronic device integrating that device, such as a server device, a physical host, or User Equipment (UE). The emergency lane occupancy behavior detection device may be implemented in hardware or software, and the UE may specifically be a terminal device such as a smartphone, a tablet computer, a laptop computer, a palmtop computer, a desktop computer, or a Personal Digital Assistant (PDA).
The electronic device can operate independently or as part of a device cluster. By applying the emergency lane occupation behavior detection method provided by the embodiment of the present application, the electronic device can automatically detect emergency lane occupation behavior, provide a degree of supervision over such behavior, and improve the recognition rate of emergency lane occupation behavior.
Referring to fig. 1, fig. 1 is a scene schematic diagram of an emergency lane occupancy behavior detection system provided in an embodiment of the present application. The emergency lane occupation behavior detection system may include an electronic device 100, and an emergency lane occupation behavior detection device is integrated in the electronic device 100. For example, the electronic device may acquire a status image of the target road; fitting according to the state image to obtain a lane line of the emergency lane; performing prediction processing according to the state image to obtain a target line segment of the target vehicle in contact with the road surface of the target road; acquiring a first coordinate and a second coordinate of the first end point and the second end point respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system; and determining whether the target vehicle occupies the emergency lane or not according to the first coordinate, the second coordinate and the curve data.
In addition, as shown in fig. 1, the emergency lane occupancy behavior detection system may further include a memory 200 for storing data, such as image data and video data.
It should be noted that the scene schematic diagram of the emergency lane occupancy behavior detection system shown in fig. 1 is only an example, and the emergency lane occupancy behavior detection system and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
In the following, an emergency lane occupancy behavior detection method provided in an embodiment of the present application is described, in which an electronic device is used as an execution subject, and for simplicity and convenience of description, the execution subject will be omitted in subsequent embodiments of the method.
Referring to fig. 2, fig. 2 is a schematic flowchart of an emergency lane occupancy behavior detection method provided in the embodiment of the present application. It should be noted that, although a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein. The emergency lane occupation behavior detection method comprises the steps of S10-S40, wherein:
and S10, acquiring the state image of the target road.
The target road refers to a road provided with an emergency lane, such as an expressway, an urban loop, an expressway and the like. The state image is an image including a target road. Referring to fig. 3, fig. 3 is a schematic diagram of an image including a target road provided in an embodiment of the present application.
In the prior art, the camera is mainly installed on a guardrail upright post of the emergency lane, and the camera detects vehicles entering the emergency lane according to a preset emergency lane area, thereby determining whether the emergency lane is occupied. However, since the field of view each camera can cover is limited, a large number of cameras need to be deployed, resulting in high hardware cost.
In some embodiments of the present application, a camera is mounted on a road patrol vehicle to capture whether other vehicles occupy an emergency lane. The installed camera is used for collecting a state image containing the target road, and whether the emergency lane is occupied or not is further detected based on the state image containing the target road.
In some embodiments of the present application, a camera may instead be installed on a specific vehicle (such as an express delivery vehicle or a private car) that passes through the target road with a frequency greater than a preset frequency threshold, so as to capture whether other vehicles occupy the emergency lane. The camera installed on such a frequently passing vehicle collects state images containing the target road, and whether the emergency lane is occupied is then detected based on those state images. On the one hand, this reduces hardware cost while preserving the detection precision of emergency lane occupation behavior. On the other hand, because the camera is arranged on a specific vehicle that already passes frequently, no dedicated patrol vehicle is needed, which reduces vehicle usage cost and can relieve road congestion to a certain extent.
It can be understood that the camera adopted in the embodiment of the present application (i.e., the camera for acquiring the state image) may also be a camera fixedly arranged on the target road, such as a camera fixedly mounted on a post.
Specifically, in practical application, the electronic device applying the emergency lane occupancy behavior detection method provided by the embodiment of the present application may directly include a camera (the camera is mainly used for collecting images including a target road) on hardware, locally store images obtained by shooting with the camera, and directly read the images in the electronic device; or the electronic equipment can also establish network connection with the camera and acquire an image obtained by the camera on line from the camera according to the network connection; alternatively, the electronic device may also read the image captured by the camera from a related storage medium storing the image captured by the camera, and the specific acquisition mode is not limited herein.
The camera can shoot images according to a preset shooting mode, for example, shooting height, shooting direction or shooting distance can be set, the specific shooting mode can be adjusted according to the camera, and the camera is not limited specifically. The multi-frame images shot by the camera can form a video through a time line.
For better understanding of the embodiments of the present application, some road scenarios to which the embodiments of the present application are applicable will be described below by way of example. Referring to fig. 4, fig. 4 is a scene schematic diagram of a target road provided in the embodiment of the present application.
In many countries, for example China, the driving habit is driving on the right, and the emergency lane is disposed at the edge of the road, that is, at the rightmost side in the driving direction, as shown in fig. 4 (a).

However, in some countries or regions, the driving habit is driving on the left. Correspondingly, the emergency lane is disposed at the edge of the road, that is, at the leftmost side in the driving direction, as shown in fig. 4 (b).

Further, as usage habits change, where the driving habit is driving on the right, the emergency lane may instead be disposed at the leftmost side in the driving direction; or, where the driving habit is driving on the left, the emergency lane may be disposed at the rightmost side in the driving direction, as shown in fig. 4 (c) and (d). Fig. 4 (c) shows the case where the driving habit is right-side driving and the emergency lane is disposed at the leftmost side in the driving direction; fig. 4 (d) shows the case where the driving habit is left-side driving and the emergency lane is disposed at the rightmost side in the driving direction.
Therefore, the methods provided by the embodiments of the present application can be based on the following 4 preconditions, respectively.
Precondition 1: the driving habit is driving on the right, and the rightmost lane in the driving direction of the road is the emergency lane. Here, the right side in the driving direction is the edge of the road.

Precondition 2: the driving habit is driving on the right, and the leftmost lane in the driving direction of the road is the emergency lane. Here, the right side in the driving direction is the edge of the road.

Precondition 3: the driving habit is driving on the left, and the leftmost lane in the driving direction of the road is the emergency lane. Here, the left side in the driving direction is the edge of the road.

Precondition 4: the driving habit is driving on the left, and the rightmost lane in the driving direction of the road is the emergency lane. Here, the left side in the driving direction is the edge of the road.
S20, performing fitting processing according to the state image to obtain a lane line of the emergency lane; and carrying out prediction processing according to the state image to obtain a target line segment of the target vehicle in contact with the road surface of the target road.
The state image includes a target vehicle, which is a vehicle on a target road.
Herein, the "lane line" refers to a lane line of an emergency lane, if not otherwise specified.
The determination of "the lane line of the emergency lane" and "the target line segment where the target vehicle is in contact with the target road surface" will be described below, respectively.
In a first aspect, a lane line of an emergency lane.
In some embodiments, the step S20 of "performing fitting processing according to the state image to obtain the lane line of the emergency lane" may be implemented by an existing lane line detection algorithm. For example, the emergency lane is to the right of the target road travel directionAnd the lane line separating the emergency lane and the common lane is a white solid line; and (3) regressing the pixel point position of the right white solid line (namely obtaining the lane line of the emergency lane) by using a LineNet algorithm (wherein the LineNet algorithm is a lane line detection algorithm). And fitting a curve in the reference coordinate system according to the regressed pixel point positions, so as to obtain the curve data of the lane line mentioned in step S30 (e.g. y ═ ax)3+bx2+cx+d)。
In a second aspect, a target line segment where a target vehicle contacts a target road surface.
In some embodiments, the step S20 of "performing prediction processing according to the state image to obtain a target line segment where the target vehicle contacts the target road surface" may specifically include: performing feature extraction processing according to the state image to obtain image feature information of the state image; performing prediction processing according to the characteristic information to obtain the position information of the target vehicle; and carrying out regression processing according to the position information to obtain the target line segment.
The image feature information refers to an image space feature obtained by performing feature extraction processing on the state image.
The position information is a detection frame (hereinafter, simply referred to as a detection frame) of the position where the target vehicle is located, which is obtained by prediction from the state image.
The target line segment is the predicted line segment along which the target vehicle contacts the target road. It can be represented in various forms; for example, it may be: the ground line connecting the left front wheel and the right rear wheel of the target vehicle (e.g., represented by the segment formed by the upper-left and lower-right corner pixels of the detection frame); the ground line connecting the right front wheel and the left rear wheel (the upper-right and lower-left corner pixels of the detection frame); the ground line connecting the left front wheel and the left rear wheel (the upper-left and lower-left corner pixels); the ground line connecting the right front wheel and the right rear wheel (the upper-right and lower-right corner pixels); or the ground line connecting a point on the head of the target vehicle with a point on its tail.
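As a sketch of the corner pairings just described, the following Python fragment reads a candidate ground-contact segment off a detection frame. The box format (x_min, y_min, x_max, y_max) in the reference frame of fig. 5 (y growing upward) and the mode names are assumptions made for illustration only.

    def box_to_segment(box, mode="lf_rr"):
        """box: (x_min, y_min, x_max, y_max) in the reference frame (y grows upward)."""
        x_min, y_min, x_max, y_max = box
        if mode == "lf_rr":  # left front wheel to right rear wheel:
            # segment formed by the upper-left and lower-right corner pixels
            return (x_min, y_max), (x_max, y_min)
        if mode == "rf_lr":  # right front wheel to left rear wheel:
            # segment formed by the upper-right and lower-left corner pixels
            return (x_max, y_max), (x_min, y_min)
        raise ValueError(f"unknown mode: {mode}")

    print(box_to_segment((120.0, 30.0, 260.0, 110.0)))  # ((120.0, 110.0), (260.0, 30.0))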
Specifically, the target line segment may be detected by a trained position detection network (herein, the position detection network refers to the trained position detection network, unless otherwise specified). The trained position detection network can be used for detecting the image and determining the area of the target vehicle in the state image; and regressing a target line segment of the target vehicle, which is in contact with the road surface of the target road, according to the region of the target vehicle.
For example, first, a preset position detection network is trained on a training data set (comprising a plurality of images containing vehicles) so that it learns the characteristics of vehicles, thereby obtaining a trained position detection network (suitable for determining, from detection processing performed on an image, the region where a vehicle exists in the image, and for regressing, from that region, the target line segment where the vehicle contacts the road surface). The preset position detection network may be an open-source network model usable for detection tasks, such as an OverFeat network, a YOLOv3 network, an SSD network, or a RetinaNet network. Specifically, an open-source detection network whose model parameters are default values may be adopted as the preset position detection network. The training process of the position detection network may refer to the following steps A1 to A6 and is not repeated here.
Then, inputting the state image into the trained position detection network to call the trained position detection network to perform feature extraction processing according to the state image to obtain image feature information of the state image; calling a position detection network, and performing prediction processing according to the characteristic information of the state image to obtain the position information of the target vehicle (namely the area of the target vehicle in the state image); and calling a position detection network, and performing regression processing according to the position information of the target vehicle to obtain a target line segment.
S30, acquiring a first coordinate and a second coordinate of the first end point and the second end point respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system.
And the first endpoint and the second endpoint are two endpoints of the target line segment. The first coordinate is a coordinate value of the first endpoint in a preset reference coordinate system. The second coordinate is a coordinate value of the second endpoint in a preset reference coordinate system.
The curve data refers to curve representation of the lane line of the emergency lane under a reference coordinate system.
For better understanding of the embodiments of the present application, the concept of "preset reference coordinate system" mentioned in the embodiments of the present application is described below.
The preset reference coordinate system (hereinafter referred to as the reference coordinate system) is the coordinate system based on which the coordinate values of each pixel in the state image are determined. The reference coordinate system may be a two-dimensional coordinate system (i.e., a planar coordinate system including an x-axis and a y-axis) or a three-dimensional coordinate system (including an x-axis, a y-axis and a z-axis).
When each pixel point in the state image is represented by a camera coordinate value, the reference coordinate system is the camera coordinate system of the camera for shooting the state image.
Referring to fig. 5, fig. 5 is a scene schematic diagram of a reference coordinate system corresponding to each pixel point in a state image provided in an embodiment of the present application. In fig. 5, w represents the width of the state image, and h represents the height of the state image. In order to make each pixel point in the state image have a certain regularity so as to facilitate subsequent data processing, in the embodiment of the present application, a coordinate system in which one coordinate axis is parallel to a straight line where the width of the state image is located and the other coordinate axis is parallel to a straight line where the height of the state image is located is considered as a reference coordinate system.
For example, taking the state image as a rectangle, in this embodiment of the application a reference coordinate system may be established by taking the lower-left corner pixel of the state image as the origin, taking the straight line through the lower-left and lower-right corner pixels (i.e., a straight line parallel to the width of the state image) as the x-axis, taking the straight line through the lower-left and upper-left corner pixels (i.e., a straight line parallel to the height of the state image) as the y-axis, taking the direction from the lower-left corner pixel toward the lower-right corner pixel as the positive x-direction, and taking the direction from the lower-left corner pixel toward the upper-left corner pixel as the positive y-direction, as shown in fig. 5.

Then, according to the coordinate transformation relationship between the reference coordinate system and the camera coordinate system shown in fig. 5, the camera coordinate value of the lower-left corner pixel of the state image is transformed into (0, 0), that of the lower-right corner pixel into (w, 0), that of the upper-left corner pixel into (0, h), and that of the upper-right corner pixel into (w, h), where w represents the coordinate difference between the lower-left and lower-right corner pixels in the x-axis direction (i.e., the width of the state image), and h represents the coordinate difference between the lower-left and upper-left corner pixels in the y-axis direction (i.e., the height of the state image).
It can be understood that the relationship between the reference coordinate system and the state image shown in fig. 5 is only an example and may be adjusted according to the actual situation. For example, the reference coordinate system may be established in any form, such as using the upper-left corner pixel of the state image as the origin, using the straight line through the lower-left and lower-right corner pixels as the y-axis, or using the straight line through the lower-left and upper-left corner pixels as the z-axis.

The camera coordinate values of the pixels in the state image can be further converted according to the actual situation, so that the pixels are represented by coordinates in different forms to facilitate subsequent calculation. Alternatively, the camera coordinate values may be left unconverted and used directly to represent the coordinate position of each pixel in subsequent calculation.

For ease of understanding, in the remainder of the embodiments of the present application, the camera coordinate values of the pixels in the state image are converted into coordinate values in the reference coordinate system shown in fig. 5, and the description proceeds using those coordinate values as an example.
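A minimal sketch of this conversion is given below, assuming camera (image) coordinates with the origin at the upper-left corner and the vertical axis pointing downward, converted into the reference frame of fig. 5 (origin at the lower-left corner pixel, y-axis pointing upward). The function and parameter names are illustrative.

    def camera_to_reference(u, v, h):
        """(u, v): camera pixel coordinates; h: image height in pixels."""
        x = u       # the x-axis parallels the image width in both frames
        y = h - v   # flip the vertical axis so y grows upward
        return x, y

    # The four image corners map to (0, 0), (w, 0), (0, h) and (w, h).
    print(camera_to_reference(0, 480, h=480))   # lower-left pixel  -> (0, 0)
    print(camera_to_reference(640, 0, h=480))   # upper-right pixel -> (640, 480)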
Referring to fig. 6, fig. 6 is a schematic view of a scene of a target line segment provided in an embodiment of the present application. In the embodiment of the present application, "first end point" and "second end point" are merely distinguishing names used to describe more clearly the positional relationship between the target vehicle and the lane line of the emergency lane; the first end point may be either end point of the target line segment, and the second end point is the other end point, as shown in fig. 6 (a) and (b).
And S40, determining whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate and the curve data.
According to the first coordinate, the second coordinate and the curve data, the position relation between the target line segment and the lane line of the emergency lane can be determined, and then whether the target line segment (reflecting the position of the target vehicle) is in the area of the emergency lane can be determined. When the target line segment is located in the area of the emergency lane, it can be determined that the target vehicle occupies the emergency lane; otherwise, it may be determined that the target vehicle does not occupy the emergency lane.
As can be seen from the above, the lane line of the emergency lane is fitted from the state image (of the target road provided with an emergency lane); prediction processing is performed according to the state image to obtain the target line segment where the target vehicle contacts the road surface of the target road; and the relationship between the coordinates of the two end points of the target line segment (i.e., the first coordinate of the first end point and the second coordinate of the second end point) and the curve data of the lane line is then used to determine whether the target vehicle occupies the emergency lane. On the one hand, the position of the target line segment reflects the position of the target vehicle to a certain extent, and whether both end points lie in the region of the emergency lane can be determined from the relationship between their coordinates and the curve data of the lane line, so that whether the target vehicle occupies the emergency lane can be determined. On the other hand, because detection is based on a state image containing the target road, a driver who maliciously occupies the emergency lane finds it difficult to evade detection, so the recognition rate of emergency lane occupation behavior is improved to a certain extent. Emergency lane occupation behavior can thus be detected automatically, providing a degree of supervision over such behavior.
In some embodiments, the reference coordinate system is the camera coordinate system. In this case, step S30 is specifically implemented as follows: when the state image of the target road is obtained, the camera coordinate value of each pixel in the state image is directly used as the coordinate of that pixel. In the first aspect, the coordinate value of the first end point in the camera coordinate system is used as the first coordinate of the first end point in the preset reference coordinate system. In the second aspect, the coordinate value of the second end point in the camera coordinate system is used as the second coordinate of the second end point in the preset reference coordinate system. In the third aspect, curve fitting is performed according to the coordinate values, in the camera coordinate system, of all pixels on the lane line (of the emergency lane) to obtain the curve data of the lane line in the reference coordinate system; for example, the curve of the lane line of the emergency lane is expressed as y = ax³ + bx² + cx + d.
In some embodiments, the reference coordinate system is not the camera coordinate system. In this case, step S30 is specifically implemented as follows: when the state image of the target road is obtained, the camera coordinate values of all pixels in the state image are first converted according to the conversion relationship between the reference coordinate system and the camera coordinate system, obtaining the coordinate values of all pixels in the reference coordinate system. Then, in the first aspect, the first coordinate of the first end point in the preset reference coordinate system can be obtained directly. In the second aspect, the second coordinate of the second end point in the preset reference coordinate system can be obtained directly. In the third aspect, curve fitting is performed according to the coordinate values, in the reference coordinate system, of all pixels on the lane line (of the emergency lane) to obtain the curve data of the lane line in the reference coordinate system; for example, the curve of the lane line of the emergency lane is expressed as y = ax⁴ + bx³ + cx² + dx + e.
In some embodiments, the position detection network may be obtained by training through the following steps A1 to A5, or through steps A1 to A4 and A6, where:
and A1, acquiring a training data set.
Wherein the training data set comprises a plurality of sample images. Each sample image contains the same or different sample vehicles, and each sample image is labeled with the actual position information of the sample vehicle and the information of the actual line segment where the sample vehicle contacts the road surface. The actual line segment refers to the line segment where the sample vehicle is in contact with the road surface. The information of the actual line segment includes the coordinates (x1, y1) and (x2, y2) of its two end points, the length len of the actual line segment, the midpoint coordinate (x_mid, y_mid) of the actual line segment, and the tangent θ of the triangle formed by the actual line segment in the y-axis and x-axis directions.
And A2, performing feature extraction processing on the sample image to obtain the image features of the sample image.
A3, performing prediction processing according to the image characteristics of the sample image to obtain the predicted position information of the sample vehicle. The predicted position information refers to position information of a sample vehicle obtained by prediction according to a sample image in a model training stage.
And A4, performing regression processing according to the predicted position information of the sample vehicle to obtain a line segment prediction result of the contact between the sample vehicle and the road surface.
And A5, determining a training total loss value of the preset position detection network according to the line segment prediction result and the information of the actual line segment. And updating the model parameters of the preset position detection network according to the total training loss value until the preset position detection network converges to obtain the trained position detection network.
Wherein the line segment prediction result may include the two end points (x1', y1') and (x2', y2') of the predicted line segment where the sample vehicle contacts the road surface, the midpoint (x_mid', y_mid') of the predicted line segment determined from these two end points, and the tangent value θ' of the triangle formed by the predicted line segment in the y-axis and x-axis directions.
For example, the total loss value of training of the preset location detection network may be determined according to the following formula (1), where formula (1) is as follows:
Loss = Σ_{i=1}^{n} (x_i − x_i^pre)²    formula (1)
wherein Loss represents the total training loss value; x1 represents the abscissa of one end point of the line segment and x2 the abscissa of the other end point; y1 represents the ordinate of one end point and y2 the ordinate of the other end point; x_mid and y_mid represent the abscissa and ordinate of the midpoint of the line segment; θ represents the tangent of the triangle formed by the line segment in the y-axis and x-axis directions; n represents the number of actual line segments labeled in the training data set; x_i represents the labels x1, y1, x2, y2, x_mid, y_mid, len, θ of the i-th (actual) line segment; and x_i^pre represents the predicted values x1', y1', x2', y2', x_mid', y_mid', len', θ' of the i-th (predicted) line segment.
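As a hedged illustration of formula (1), the following Python sketch computes a sum-of-squared-differences loss over the eight labeled quantities of each line segment. The squared-error form is an assumption adopted here because the original formula image is not reproduced; only the variable definitions above are certain.

    import numpy as np

    def training_loss(labels, preds):
        """labels, preds: arrays of shape (n, 8), one row per line segment,
        columns (x1, y1, x2, y2, x_mid, y_mid, len, theta).
        Assumed form: Loss = sum over segments and components of squared error."""
        labels = np.asarray(labels, dtype=float)
        preds = np.asarray(preds, dtype=float)
        return float(np.sum((labels - preds) ** 2))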
A6, determining a line segment regression loss value of the preset position detection network according to the line segment prediction result and the information of the actual line segment; determining a position detection loss value of the preset position detection network according to the predicted position information and the actual position information of the sample vehicle; and taking the line segment regression loss value and the position detection loss value together as the total training loss value of the preset position detection network. The model parameters of the preset position detection network are then updated according to the total training loss value until the network converges, obtaining the trained position detection network.
Furthermore, in order to improve the detection precision of emergency lane occupation behavior and reduce unnecessary data processing, in some embodiments of the present application, after the position information of the target vehicle is detected, it is further detected whether the target vehicle is close to the emergency lane area. When the target vehicle is far from the emergency lane area, it can be directly judged that the target vehicle does not occupy the emergency lane, without further data processing; when the target vehicle is close to the emergency lane area, data processing such as regression of the target line segment is further performed to determine whether the target vehicle occupies the emergency lane.
In order to detect whether the target vehicle is relatively close to the emergency lane area, in this embodiment of the present application, after the step of "performing prediction processing according to the feature information to obtain the position information of the target vehicle", and before the step of "performing regression processing according to the position information to obtain the target line segment", the method further includes: and carrying out classification processing according to the position information to obtain the position label of the target vehicle.
For example, in the learning and training phase of the location detection network, the preset location detection network further learns the feature relationship between the location information of the vehicle and the location tag of the vehicle. And the trained position detection network can be used for carrying out classification processing according to the position information to obtain the position label of the target vehicle.
In an actual scene, when the position information of the target vehicle is determined, a position detection network can be directly called to perform classification regression prediction according to the position information of the target vehicle, so as to obtain a position label of the target vehicle.
In this case, the step S20 of "performing prediction processing based on the state image to obtain the target line segment where the target vehicle contacts the target road surface" may specifically include: performing feature extraction processing according to the state image to obtain image feature information of the state image; performing prediction processing according to the characteristic information to obtain the position information of the target vehicle; classifying and processing according to the position information to obtain a position label of the target vehicle; and when the position label is the same as a preset position label, performing regression processing according to the position information to obtain the target line segment.
For simplicity of description, the image feature information, the position information of the target vehicle, and the determination of the target line segment may refer to the above description and examples, which are not repeated herein.
For example, the position labels may include 3 categories: "left", "center" and "right". Under the above preconditions 1 and 4, the preset position label may be "right", in order to preliminarily detect whether the target vehicle may be in the emergency lane area.
When the position label of the target vehicle is "right", regression processing is performed according to the position information to obtain the target line segment.
As another example, the position labels may include 3 categories: "left", "center" and "right". Under the above preconditions 2 and 3, the preset position label may be "left", in order to preliminarily detect whether the target vehicle may be in the emergency lane area.
When the position label of the target vehicle is "left", regression processing is performed according to the position information to obtain the target line segment.
The preset position labels above are only examples; it can be understood that the preset position label may be set according to specific scene requirements, and the specific categories of the preset position label are not limited herein.
As can be seen from the above, by performing classification processing on the position information of the target vehicle, the position tags (such as "left", "center", and "right") of the target vehicle at specific positions of the target road are determined, so as to determine whether the target vehicle is relatively close to the emergency lane area; when the target vehicle is determined to be closer to the emergency lane area, data processing such as regression of a target line segment is further performed to determine whether the target vehicle occupies the emergency lane. On the one hand, the detection precision of the emergency lane occupation behavior is improved. On the other hand, analysis of vehicles not close to the emergency lane area is reduced, and unnecessary data processing is further reduced.
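The gating logic of this subsection can be sketched in Python as follows. The callables standing in for the classification and regression stages, and the dictionary used for the position information, are hypothetical placeholders rather than the embodiment's actual interfaces.

    from typing import Callable, Optional, Tuple

    Segment = Tuple[Tuple[float, float], Tuple[float, float]]

    def gated_segment_regression(
        position_info: dict,
        classify: Callable[[dict], str],
        regress: Callable[[dict], Segment],
        preset_label: str = "right",
    ) -> Optional[Segment]:
        """Run the segment regression only when the coarse position label
        matches the preset label (e.g. "right" under preconditions 1 and 4)."""
        label = classify(position_info)  # one of "left" / "center" / "right"
        if label != preset_label:
            return None                  # vehicle is not near the emergency lane
        return regress(position_info)    # only now run the costlier regression

    # Usage with stand-in callables:
    seg = gated_segment_regression(
        {"box": (120.0, 30.0, 260.0, 110.0)},
        classify=lambda info: "right",
        regress=lambda info: ((120.0, 110.0), (260.0, 30.0)),
    )
    print(seg)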
Further, in order to improve the accuracy of the target line segment, in some embodiments of the present application, after the target line segment is regressed, the pixel-distance errors of the midpoint, tangent value and length of the target line segment (denoted error_mid, error_θ and error_len) are further checked against empirical values. When at least two of the errors are smaller than their empirical values, the regressed target line segment is determined to be valid, and the subsequent data processing steps continue with the target line segment. When at least two of the errors are greater than or equal to their empirical values, the target line segment is considered invalid and is discarded without further data processing.
For example, it is determined whether or not the following equations (2), (3), and (4) are satisfied, respectively, and when at least two of the equations (2), (3), and (4) are satisfied, it is determined that the regressed target line segment is valid. Wherein, the formulas (2), (3) and (4) are respectively:
error_mid < empirical value 1    formula (2)
error_θ < empirical value 2    formula (3)
error_len < empirical value 3    formula (4)
Wherein the pixel-distance errors error_mid, error_θ and error_len of the midpoint, tangent value and length of the target line segment are determined by the following formulas (5), (6) and (7), respectively:
error_mid = ‖i − i_mid‖    formula (5)
where i represents the midpoint of the predicted line segment calculated from its two predicted end points (x1', y1') and (x2', y2'), and i_mid represents the midpoint of the predicted line segment directly predicted by the model.
error_θ = (i − i_θ)    formula (6)
where i represents the tangent value, calculated from the two predicted end points (x1', y1') and (x2', y2'), of the triangle formed by the predicted line segment in the y-axis and x-axis directions, and i_θ represents the tangent value directly predicted by the model.
error_len = (i − i_len)    formula (7)
where i represents the length of the predicted line segment calculated from the two predicted end points (x1', y1') and (x2', y2'), and i_len represents the length of the predicted line segment directly predicted by the model.
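The validity check of formulas (2) through (7) can be sketched as follows. The model is assumed to predict the midpoint, tangent and length directly in addition to the two end points, and the threshold values stand in for the unspecified empirical values.

    import math

    def segment_is_valid(p1, p2, mid_pred, theta_pred, len_pred,
                         t_mid=5.0, t_theta=0.1, t_len=10.0):
        (x1, y1), (x2, y2) = p1, p2
        # Quantities derived from the predicted end points.
        mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        length = math.hypot(x2 - x1, y2 - y1)
        theta = (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")

        error_mid = math.hypot(mid[0] - mid_pred[0], mid[1] - mid_pred[1])  # (5)
        error_theta = abs(theta - theta_pred)                               # (6)
        error_len = abs(length - len_pred)                                  # (7)

        checks = [error_mid < t_mid, error_theta < t_theta, error_len < t_len]
        return sum(checks) >= 2  # valid when at least two of (2)-(4) hold

    print(segment_is_valid(
        p1=(120.0, 110.0), p2=(260.0, 30.0),
        mid_pred=(191.0, 69.0), theta_pred=-0.57, len_pred=161.0,
    ))  # True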
Referring to fig. 7, fig. 7 is a flowchart illustrating an embodiment of step S40 provided in the present embodiment. In some embodiments of the present application, the step S40 may specifically include the following steps S41 to S42, wherein:
and S41, determining the position relation between the first end point and the second end point and the lane line according to the first coordinate, the second coordinate and the curve data.
The position relationship refers to relative positions between the first end point and the lane line and between the second end point and the lane line. The position relationship may include the following several cases:
positional relationship 1: the first endpoint and the second endpoint are both located on the right side of the lane line of the emergency lane.
Positional relationship 2: the first endpoint and the second endpoint are both located on the left side of the lane line of the emergency lane.
Positional relationship 3: the first endpoint and the second endpoint are both located on a lane line of the emergency lane.
Positional relationship 4: the first end point is on the left side of the lane line of the emergency lane (or on the lane line) and the second end point is on the right side of the lane line.

Positional relationship 5: the first end point is on the right side of the lane line of the emergency lane (or on the lane line) and the second end point is on the left side of the lane line.

Positional relationship 6: the first end point is on the left side of the lane line of the emergency lane and the second end point is on the right side of the lane line (or on the lane line).

Positional relationship 7: the first end point is on the right side of the lane line of the emergency lane and the second end point is on the left side of the lane line (or on the lane line).
Hereinafter, cases other than positional relationships 1 and 2 (i.e., positional relationships 3, 4, 5, 6 and 7) are collectively referred to as other positional relationships.
And S42, performing discrimination processing according to the position relation, and determining whether the target vehicle occupies an emergency lane.
In some embodiments, the emergency lane occupancy behavior detection method may be applied to the above-described precondition 1 and precondition 4. In this case, step S42 may specifically include: if the first endpoint and the second endpoint are both located on the right side of the lane line of the emergency lane, determining that the target vehicle occupies the emergency lane; and if the first endpoint and the second endpoint are both positioned on the left side of the lane line of the emergency lane, determining that the target vehicle does not occupy the emergency lane. And if the first end point and the second end point are in other position relations with the lane line, determining that the target vehicle is driving out or into the emergency lane. For example, if the first end point is on the right side of the lane line of the emergency lane and the second end point is on the left side of the lane line of the emergency lane, it is determined that the target vehicle is driving into the emergency lane. For another example, if the first end point is on the left side of the lane line of the emergency lane and the second end point is on the right side of the lane line of the emergency lane, it is determined that the target vehicle is exiting the emergency lane.
In some embodiments, the emergency lane occupancy behavior detection method may be applied in the case of precondition 2 or precondition 3 described above. In this case, step S42 may specifically include: if the first endpoint and the second endpoint are both located on the right side of the lane line of the emergency lane, determining that the target vehicle does not occupy the emergency lane; if the first endpoint and the second endpoint are both located on the left side of the lane line of the emergency lane, determining that the target vehicle occupies the emergency lane; and if the first endpoint and the second endpoint are in one of the other positional relationships with the lane line, determining that the target vehicle is driving out of or into the emergency lane. For example, if the first endpoint is on the right side of the lane line of the emergency lane and the second endpoint is on the left side, it is determined that the target vehicle is driving into the emergency lane. For another example, if the first endpoint is on the left side of the lane line of the emergency lane and the second endpoint is on the right side, it is determined that the target vehicle is driving out of the emergency lane.
Therefore, as the position at which the emergency lane is set changes, the positional relationship used to determine that the target vehicle occupies the emergency lane can be adjusted adaptively. For example, in the case where the emergency lane is on the right side in the driving direction, when both the first endpoint and the second endpoint are located on the right side of the lane line of the emergency lane, it is determined that the target vehicle occupies the emergency lane. In the case where the emergency lane is on the left side in the driving direction, when both the first endpoint and the second endpoint are located on the left side of the lane line of the emergency lane, it is determined that the target vehicle occupies the emergency lane.
From the above, it can be seen that, by detecting the position relationship between the first endpoint and the second endpoint and the lane line of the emergency lane, whether the target vehicle is in the emergency lane area can be determined, and further whether the target vehicle occupies the emergency lane can be determined.
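As an illustration of this side test, the following is a minimal Python sketch, an assumption rather than code from the patent, that classifies a single point against the fitted lane line curve y = f(x); the mapping of a larger ordinate to the "right" side follows the comparisons described below and depends on the chosen reference coordinate system.

```python
def point_side(x, y, f):
    """Return 'right', 'left', or 'on' for point (x, y) relative to curve f.

    f is a callable implementing the fitted lane line y = f(x). The side
    naming assumes that a larger ordinate corresponds to the right side of
    the lane line in the reference coordinate system.
    """
    fx = f(x)
    if y > fx:
        return "right"
    elif y < fx:
        return "left"
    return "on"
```

Applying point_side to the two endpoints of the target line segment yields one of the seven positional relationships enumerated above.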
In the embodiment of the present application, the first coordinate is denoted as (x1, y1), the second coordinate is denoted as (x2, y2), and the curve data is denoted as y = f(x), where f(x) is a polynomial. For example, it may be a second order polynomial (e.g., ax² + bx + c), a third order polynomial (e.g., ax³ + bx² + cx + d), or a fourth order polynomial (e.g., ax⁴ + bx³ + cx² + dx + e).
In some embodiments, the positional relationship includes the first endpoint and the second endpoint both being located on the right side of the lane line (of the emergency lane). In this case, step S41 may specifically include: detecting the magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting the magnitude relationship between the ordinate y2 of the second endpoint and f(x2); when y1 > f(x1) and y2 > f(x2) are detected, determining that the first endpoint and the second endpoint are both on the right side of the lane line.
Correspondingly, in the case of precondition 1 or precondition 4 mentioned above, step S42 may specifically include: when y1 > f(x1) and y2 > f(x2) are detected, that is, when the positional relationship is that the first endpoint and the second endpoint are both on the right side of the lane line, determining that the target vehicle occupies the emergency lane.
Correspondingly, in the case of precondition 2 or precondition 3 mentioned above, step S42 may specifically include: when y1 > f(x1) and y2 > f(x2) are detected, that is, when the positional relationship is that the first endpoint and the second endpoint are both on the right side of the lane line, determining that the target vehicle does not occupy the emergency lane.
For example, suppose the curve data y = f(x) is specifically a third order polynomial: y = ax³ + bx² + cx + d. Substituting x = x1 gives f(x1) = a(x1)³ + b(x1)² + c(x1) + d, and substituting x = x2 gives f(x2) = a(x2)³ + b(x2)² + c(x2) + d. The magnitude relationship of y1 with f(x1) and that of y2 with f(x2) can then be compared.
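As a worked illustration of this substitution, the following sketch evaluates the fitted cubic with numpy.polyval and performs the two magnitude comparisons of step S41; the coefficient and endpoint values are illustrative placeholders, not values from the patent.

```python
import numpy as np

# Coefficients [a, b, c, d] of the fitted cubic lane line (placeholders).
a, b, c, d = 1e-7, -2e-4, 0.6, 150.0

def f(x):
    # Evaluates f(x) = a*x**3 + b*x**2 + c*x + d (highest power first).
    return np.polyval([a, b, c, d], x)

x1, y1 = 640.0, 700.0   # first endpoint of the contact line segment (placeholder)
x2, y2 = 820.0, 705.0   # second endpoint (placeholder)

# The two magnitude comparisons used in step S41.
print(y1 > f(x1), y2 > f(x2))
```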
As can be seen from the above, by comparing the magnitude relationship between the ordinate y1 of the first endpoint and f(x1) and between the ordinate y2 of the second endpoint and f(x2), and determining that the first endpoint and the second endpoint are both on the right side of the lane line when y1 > f(x1) and y2 > f(x2), it can be determined whether the target vehicle is within the emergency lane area, and thus whether the target vehicle occupies the emergency lane can be accurately determined.
In some embodiments, the positional relationship includes the first endpoint and the second endpoint both being located on the left side of the lane line (of the emergency lane). In this case, step S41 may specifically include: detecting the magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting the magnitude relationship between the ordinate y2 of the second endpoint and f(x2); when y1 < f(x1) and y2 < f(x2) are detected, determining that the first endpoint and the second endpoint are both on the left side of the lane line.
Correspondingly, in the case of precondition 1 or precondition 4 mentioned above, step S42 may specifically include: when y1 < f(x1) and y2 < f(x2) are detected, that is, when the positional relationship is that the first endpoint and the second endpoint are both on the left side of the lane line, determining that the target vehicle does not occupy the emergency lane.
Correspondingly, in the case of precondition 2 or precondition 3 mentioned above, step S42 may specifically include: when y1 < f(x1) and y2 < f(x2) are detected, that is, when the positional relationship is that the first endpoint and the second endpoint are both on the left side of the lane line, determining that the target vehicle occupies the emergency lane.
For the detection of the magnitude relationship between the ordinate y1 of the first endpoint and f(x1) and between the ordinate y2 of the second endpoint and f(x2), reference may be made to the description and examples above, which are not repeated here.
As can be seen from the above, by comparing the magnitude relationship between the ordinate y1 of the first endpoint and f(x1) and between the ordinate y2 of the second endpoint and f(x2), and determining that the first endpoint and the second endpoint are both on the left side of the lane line when y1 < f(x1) and y2 < f(x2), it can be determined whether the target vehicle is within the emergency lane area, and thus whether the target vehicle occupies the emergency lane can be accurately determined.
In some embodiments, the positional relationship further includes the other positional relationships (namely, positional relationships 3, 4, 5, 6, and 7 above). In this case, the emergency lane occupancy behavior detection method may further include: detecting the magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting the magnitude relationship between the ordinate y2 of the second endpoint and f(x2); when y1 ≥ f(x1) and y2 ≤ f(x2) are detected, or y1 ≤ f(x1) and y2 ≥ f(x2) are detected, determining that the target vehicle is driving into or out of the emergency lane.
Specifically, when y1 < f(x1) and y2 > f(x2) are detected, or when y1 > f(x1) and y2 < f(x2) are detected, it may be determined that the first endpoint and the second endpoint are on the left side and the right side of the lane line respectively. When y1 = f(x1) and y2 = f(x2) are detected, it may be determined that the first endpoint and the second endpoint are both on the lane line.
For the detection of the magnitude relationship between the ordinate y1 of the first endpoint and f(x1) and between the ordinate y2 of the second endpoint and f(x2), reference may be made to the description and examples above, which are not repeated here.
From the above, when it is detected that y1 ≥ f(x1) and y2 ≤ f(x2), or that y1 ≤ f(x1) and y2 ≥ f(x2), it can be determined that one of the other positional relationships (namely, positional relationships 3, 4, 5, 6, and 7) holds between the first endpoint, the second endpoint, and the lane line, and it is further determined that the target vehicle is driving into or out of the emergency lane, so that more comprehensive and accurate information is provided for detecting emergency lane occupancy behavior. When the target vehicle is detected to be driving into or out of the emergency lane, it may further be determined that the target vehicle occupies the emergency lane.
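Combining the branches above, a minimal sketch of the discrimination of step S42 might look as follows. It assumes precondition 1 or 4, where the right side of the lane line corresponds to the emergency lane area; under precondition 2 or 3 the first two branches would be swapped.

```python
def judge_occupancy(y1, fx1, y2, fx2):
    """Classify the target vehicle from the step S41 comparisons.

    y1, y2 are the endpoint ordinates; fx1, fx2 are f(x1), f(x2) on the
    fitted lane line. Assumes "right of the lane line" is the emergency
    lane area (preconditions 1 and 4).
    """
    if y1 > fx1 and y2 > fx2:       # positional relationship 1
        return "occupies emergency lane"
    if y1 < fx1 and y2 < fx2:       # positional relationship 2
        return "does not occupy emergency lane"
    # Positional relationships 3 to 7: the contact segment touches or
    # straddles the lane line.
    return "driving into or out of emergency lane"
```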
In order to facilitate timely handling of emergency lane occupancy behavior by traffic police, and thereby ensure the correct use of public resources, in the embodiment of the present application, when emergency lane occupancy behavior is detected, alarm processing may further be performed according to information such as the specific road position and the specific time of the behavior. That is, the emergency lane occupancy behavior detection method of the embodiment of the present application may further include: when it is determined that the target vehicle occupies the emergency lane, acquiring vehicle information of the target vehicle; and performing traffic alarm processing according to the vehicle information.
The vehicle information may include at least one of license plate number information of the target vehicle, a specific position where the target vehicle occupies the emergency lane, and a specific time when the target vehicle occupies the emergency lane.
Specifically, when it is detected that the target vehicle occupies the emergency lane, that is, when emergency lane occupancy behavior is detected, license plate recognition processing may further be performed based on the state image to recognize the license plate number information of the target vehicle. The license plate number of the target vehicle may be recognized based on an existing license plate recognition algorithm, which is not described in detail here.
Further, a GPS positioning device is mounted on the specific vehicle. The GPS positioning device is used to acquire the GPS coordinate position of the specific vehicle, thereby obtaining the position information of the specific vehicle; the specific position at which the target vehicle occupies the emergency lane can then be determined.
The camera of the specific vehicle can also be used for recording the acquisition time of the state image, so that the specific time of the target vehicle occupying the emergency lane can be determined based on the acquisition time of the state image.
Finally, information such as the license plate number information of the target vehicle, the specific position at which the target vehicle occupies the emergency lane, and the specific time at which the target vehicle occupies the emergency lane is sent to the corresponding traffic personnel, realizing automatic traffic alarm processing, so that the traffic police can directly learn the specific road position and specific time of the emergency lane occupancy behavior and handle it in a timely manner.
From the above, automatic traffic alarm processing is realized by sending information such as the license plate number information of the target vehicle exhibiting the emergency lane occupancy behavior, the specific position at which the target vehicle occupies the emergency lane, and the specific time at which it does so to the corresponding traffic personnel. The traffic police can thus handle the emergency lane occupancy behavior in a timely manner, the behavior is supervised promptly, and the correct use of public resources is ensured to a certain extent.
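A schematic sketch of this alarm step follows. The helpers recognize_plate and send_traffic_alert are hypothetical stand-ins, since the patent relies on an existing license plate recognition algorithm and does not name a concrete alerting channel.

```python
from datetime import datetime

def recognize_plate(state_image):
    # Placeholder for an existing license plate recognition algorithm.
    return "UNKNOWN"

def send_traffic_alert(vehicle_info):
    # Placeholder for delivering the alarm to traffic personnel.
    print("ALERT:", vehicle_info)

def raise_alarm(state_image, gps_position, capture_time=None):
    # Assemble the vehicle information described above and send the alarm.
    vehicle_info = {
        "plate": recognize_plate(state_image),          # from the state image
        "position": gps_position,                       # from the GPS device
        "time": capture_time or datetime.now().isoformat(),
    }
    send_traffic_alert(vehicle_info)
```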
In order to better implement the emergency lane occupancy behavior detection method of the embodiment of the present application, an emergency lane occupancy behavior detection apparatus is further provided on the basis of that method. As shown in fig. 8, which is a schematic structural diagram of an embodiment of the emergency lane occupancy behavior detection apparatus of the embodiment of the present application, the emergency lane occupancy behavior detection apparatus 800 includes:
a first obtaining unit 801, configured to obtain a state image of a target road, where the target road is a road provided with an emergency lane;
the processing unit 802 is configured to perform fitting processing according to the state image to obtain a lane line of the emergency lane; performing prediction processing according to the state image to obtain a target line segment of a target vehicle in contact with the road surface of the target road, wherein the target vehicle is a vehicle on the target road;
a second obtaining unit 803, configured to obtain a first coordinate and a second coordinate of a first endpoint and a second endpoint in a preset reference coordinate system, respectively, and curve data of the lane line in the reference coordinate system, where the first endpoint and the second endpoint are two endpoints of the target line segment;
a determining unit 804, configured to determine whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate, and the curve data.
In some embodiments of the present application, the determining unit 804 is further specifically configured to:
determining the position relation between the first end point and the second end point and the lane line according to the first coordinate, the second coordinate and the curve data;
and performing discrimination processing according to the position relation to determine whether the target vehicle occupies an emergency lane.
In some embodiments of the present application, the position relationship includes that the first endpoint and the second endpoint are both located on the right side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), the curve data is y = f(x), and the determining unit 804 is further specifically configured to:
detect a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detect a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
determine that the first endpoint and the second endpoint are both on the right side of the lane line when y1 > f(x1) and y2 > f(x2) are detected;
and when the position relation is that the first endpoint and the second endpoint are both positioned on the right side of the lane line, determining that the target vehicle occupies an emergency lane.
In some embodiments of the present application, the position relationship further includes that the first endpoint and the second endpoint are both located on the left side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), the curve data is y = f(x), and the determining unit 804 is further specifically configured to:
detect a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detect a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
determine that the first endpoint and the second endpoint are both on the left side of the lane line when y1 < f(x1) and y2 < f(x2) are detected;
and when the position relation is that the first endpoint and the second endpoint are both positioned on the left side of the lane line, determining that the target vehicle does not occupy an emergency lane.
In some embodiments of the present application, the position relationship further includes that the first endpoint and the second endpoint are respectively located on the left side and the right side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), the curve data is y = f(x), and the determining unit 804 is specifically further configured to:
detect a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detect a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
determine that the first endpoint and the second endpoint are respectively on the left side and the right side of the lane line when y1 < f(x1) and y2 > f(x2) are detected; or
determine that the first endpoint and the second endpoint are respectively on the left side and the right side of the lane line when y1 > f(x1) and y2 < f(x2) are detected;
and when the position relation is that the first endpoint and the second endpoint are respectively positioned on the left side and the right side of the lane line, determining that the target vehicle is driving into an emergency lane or driving out of the emergency lane.
In some embodiments of the present application, the processing unit 802 is further specifically configured to:
perform feature extraction processing according to the state image to obtain image feature information of the state image;
perform prediction processing according to the feature information to obtain position information of the target vehicle;
perform classification processing according to the position information to obtain a position label of the target vehicle;
and when the position label is the same as a preset position label, perform regression processing according to the position information to obtain the target line segment, as sketched below.
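The following is a schematic sketch of this prediction flow. The four stage callables are hypothetical stand-ins, since the patent does not specify a concrete model for any stage.

```python
def predict_target_segment(state_image, extract_features, predict_position,
                           classify_position, regress_segment, preset_label):
    """Run the processing unit's four stages on one state image.

    All stage arguments are callables supplied by the caller; this is a
    sketch of the control flow, not a concrete model implementation.
    """
    features = extract_features(state_image)     # feature extraction
    position = predict_position(features)        # vehicle position information
    label = classify_position(position)          # position label
    if label == preset_label:                    # matches the preset label
        return regress_segment(position)         # target line segment
    return None                                  # no qualifying vehicle found
```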
In some embodiments of the present application, the emergency lane occupancy behavior detection apparatus 800 further comprises an alarm unit (not shown in the figures), which is specifically configured to:
when the target vehicle is determined to occupy the emergency lane, vehicle information of the target vehicle is acquired;
and carrying out traffic alarm processing according to the vehicle information.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
Since the emergency lane occupancy behavior detection apparatus may execute the steps in the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7, the beneficial effects that can be achieved by the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7 can be achieved, for details, see the foregoing description, and are not repeated herein.
In addition, in order to better implement the emergency lane occupancy behavior detection method in the embodiment of the present application, based on the emergency lane occupancy behavior detection method, an electronic device is further provided in the embodiment of the present application, referring to fig. 9, fig. 9 shows a schematic structural diagram of the electronic device in the embodiment of the present application, specifically, the electronic device provided in the embodiment of the present application includes a processor 901, and when the processor 901 is used to execute a computer program stored in a memory 902, each step of the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7 is implemented; alternatively, the processor 901 is configured to implement the functions of the units in the corresponding embodiment of fig. 8 when executing the computer program stored in the memory 902.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 902 and executed by the processor 901 to implement embodiments of the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The electronic device may include, but is not limited to, a processor 901 and a memory 902. Those skilled in the art will appreciate that the illustration is merely an example of an electronic device and does not constitute a limitation of the electronic device, which may include more or fewer components than those illustrated, may combine some components, or may have different components; for example, the electronic device may further include input/output devices, network access devices, and a bus, with the processor 901, the memory 902, the input/output devices, and the network access devices connected via the bus.
The processor 901 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the electronic device and connects the various parts of the whole electronic device using various interfaces and lines.
The memory 902 may be used to store computer programs and/or modules, and the processor 901 implements various functions of the computer apparatus by running or executing the computer programs and/or modules stored in the memory 902 and calling the data stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data and video data), and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the emergency lane occupancy behavior detection apparatus, the electronic device and the corresponding units thereof described above may refer to the description of the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7, and are not described herein again in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
For this reason, an embodiment of the present application provides a computer-readable storage medium, where a plurality of instructions are stored, where the instructions can be loaded by a processor to execute steps in the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7 in the present application, and specific operations may refer to descriptions of the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7, which are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7 in the present application, the beneficial effects that can be achieved by the emergency lane occupancy behavior detection method in any embodiment corresponding to fig. 1 to 7 in the present application can be achieved, which are described in detail in the foregoing description and will not be repeated herein.
The method, the device, the electronic device and the computer-readable storage medium for detecting the emergency lane occupation behavior provided by the embodiment of the application are introduced in detail, a specific example is applied in the description to explain the principle and the implementation of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An emergency lane occupancy behavior detection method, characterized in that the method comprises:
acquiring a state image of a target road, wherein the target road is a road provided with an emergency lane;
fitting according to the state image to obtain a lane line of the emergency lane; performing prediction processing according to the state image to obtain a target line segment of a target vehicle in contact with the road surface of the target road, wherein the target vehicle is a vehicle on the target road;
acquiring a first coordinate and a second coordinate of a first end point and a second end point respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system, wherein the first end point and the second end point are two end points of the target line segment;
and determining whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate and the curve data.
2. The emergency lane occupancy behavior detection method according to claim 1, wherein the determining whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate, and the curve data includes:
determining the position relation between the first end point and the second end point and the lane line according to the first coordinate, the second coordinate and the curve data;
and performing discrimination processing according to the position relation to determine whether the target vehicle occupies an emergency lane.
3. The emergency lane occupancy behavior detection method according to claim 2, wherein the position relationship includes that the first endpoint and the second endpoint are both located on the right side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), and the curve data is y = f(x), and the determining the position relationship between the first endpoint and the second endpoint and the lane line respectively according to the first coordinate, the second coordinate, and the curve data includes:
detecting a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
when y1 > f(x1) and y2 > f(x2) are detected, determining that the first endpoint and the second endpoint are both on the right side of the lane line;
the judging and processing according to the position relation to determine whether the target vehicle occupies an emergency lane includes:
and when the position relation is that the first endpoint and the second endpoint are both positioned on the right side of the lane line, determining that the target vehicle occupies an emergency lane.
4. The emergency lane occupancy behavior detection method according to claim 2, wherein the position relationship further includes that the first endpoint and the second endpoint are both located on the left side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), and the curve data is y = f(x), and the determining the position relationship between the first endpoint and the second endpoint and the lane line respectively according to the first coordinate, the second coordinate, and the curve data includes:
detecting a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
when y1 < f(x1) and y2 < f(x2) are detected, determining that the first endpoint and the second endpoint are both on the left side of the lane line;
the judging and processing according to the position relation to determine whether the target vehicle occupies an emergency lane includes:
and when the position relation is that the first endpoint and the second endpoint are both positioned on the left side of the lane line, determining that the target vehicle does not occupy an emergency lane.
5. The emergency lane occupancy behavior detection method according to claim 2, wherein the positional relationship further includes that the first endpoint and the second endpoint are respectively on the left side and the right side of the lane line, the first coordinate is (x1, y1), the second coordinate is (x2, y2), and the curve data is y = f(x), and the method further includes:
detecting a magnitude relationship between the ordinate y1 of the first endpoint and f(x1), and detecting a magnitude relationship between the ordinate y2 of the second endpoint and f(x2);
when y1 < f(x1) and y2 > f(x2) are detected, determining that the first endpoint and the second endpoint are respectively on the left side and the right side of the lane line; or
when y1 > f(x1) and y2 < f(x2) are detected, determining that the first endpoint and the second endpoint are respectively on the left side and the right side of the lane line;
and when the position relation is that the first endpoint and the second endpoint are respectively positioned on the left side and the right side of the lane line, determining that the target vehicle is driving into an emergency lane or driving out of the emergency lane.
6. The emergency lane occupancy behavior detection method according to any one of claims 1 to 5, wherein the performing prediction processing according to the state image to obtain a target line segment where a target vehicle contacts the target road surface includes:
performing feature extraction processing according to the state image to obtain image feature information of the state image;
performing prediction processing according to the feature information to obtain position information of the target vehicle;
performing classification processing according to the position information to obtain a position label of the target vehicle;
and when the position label is the same as a preset position label, performing regression processing according to the position information to obtain the target line segment.
7. The emergency lane occupancy behavior detection method according to any one of claims 1 to 5, characterized in that the method further comprises:
when the target vehicle is determined to occupy the emergency lane, vehicle information of the target vehicle is acquired;
and carrying out traffic alarm processing according to the vehicle information.
8. An emergency lane occupancy behavior detection apparatus, characterized in that the emergency lane occupancy behavior detection apparatus comprises:
a first acquisition unit, configured to acquire a state image of a target road, wherein the target road is a road provided with an emergency lane;
a processing unit, configured to perform fitting processing according to the state image to obtain a lane line of the emergency lane, and perform prediction processing according to the state image to obtain a target line segment where a target vehicle contacts the road surface of the target road, wherein the target vehicle is a vehicle on the target road;
a second acquisition unit, configured to acquire a first coordinate and a second coordinate of a first endpoint and a second endpoint respectively in a preset reference coordinate system, and curve data of the lane line in the reference coordinate system, wherein the first endpoint and the second endpoint are two endpoints of the target line segment;
and a determining unit, configured to determine whether the target vehicle occupies an emergency lane according to the first coordinate, the second coordinate, and the curve data.
9. An electronic device, characterized in that it comprises a processor and a memory, in which a computer program is stored, which when called by the processor executes the emergency lane occupancy behavior detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which is loaded by a processor to perform the steps in the emergency lane occupancy behavior detection method of any one of claims 1 to 7.
CN202011007778.7A 2020-09-23 2020-09-23 Emergency lane occupation behavior detection method and device, electronic equipment and storage medium Active CN114255597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011007778.7A CN114255597B (en) 2020-09-23 2020-09-23 Emergency lane occupation behavior detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011007778.7A CN114255597B (en) 2020-09-23 2020-09-23 Emergency lane occupation behavior detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114255597A true CN114255597A (en) 2022-03-29
CN114255597B CN114255597B (en) 2023-07-07

Family

ID=80788832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011007778.7A Active CN114255597B (en) 2020-09-23 2020-09-23 Emergency lane occupation behavior detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114255597B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279968A (en) * 2015-11-17 2016-01-27 陕西科技大学 Discrimination system and method for the illegal emergency lane occupancy behavior of highway motor vehicle
CN105702048A (en) * 2016-03-23 2016-06-22 武汉理工大学 Automobile-data-recorder-based illegal lane occupation identification system and method for automobile on highway
CN105741559A (en) * 2016-02-03 2016-07-06 安徽清新互联信息科技有限公司 Emergency vehicle lane illegal occupation detection method based on lane line model
CN106652468A (en) * 2016-12-09 2017-05-10 武汉极目智能技术有限公司 Device and method for detection of violation of front vehicle and early warning of violation of vehicle on road
CN107705552A (en) * 2016-08-08 2018-02-16 杭州海康威视数字技术股份有限公司 A kind of Emergency Vehicle Lane takes behavioral value method, apparatus and system
US20180293684A1 (en) * 2016-09-07 2018-10-11 Southeast University Supervision and penalty method and system for expressway emergency lane occupancy
CN108932852A (en) * 2018-06-22 2018-12-04 安徽科力信息产业有限责任公司 A kind of illegal method and device for occupying Emergency Vehicle Lane behavior of record motor vehicle
CN110782673A (en) * 2019-10-26 2020-02-11 江苏看见云软件科技有限公司 Vehicle violation identification and detection system based on unmanned aerial vehicle shooting cloud computing
CN111382704A (en) * 2020-03-10 2020-07-07 北京以萨技术股份有限公司 Vehicle line-pressing violation judgment method and device based on deep learning and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕正荣: "Research on Key Technologies of Video Processing in Intelligent Transportation Systems" *
吕正荣: "Research on Key Technologies of Video Processing in Intelligent Transportation Systems", Chinese Master's Theses Full-text Database, Engineering Science and Technology II *

Also Published As

Publication number Publication date
CN114255597B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Marzougui et al. A lane tracking method based on progressive probabilistic Hough transform
KR20220000977A (en) Apparatus and method for providing guidance information using crosswalk recognition result
WO2020048265A1 (en) Methods and apparatuses for multi-level target classification and traffic sign detection, device and medium
WO2015089867A1 (en) Traffic violation detection method
Lee et al. Available parking slot recognition based on slot context analysis
Ding et al. Fast lane detection based on bird’s eye view and improved random sample consensus algorithm
CN109766867B (en) Vehicle running state determination method and device, computer equipment and storage medium
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN111583660B (en) Vehicle steering behavior detection method, device, equipment and storage medium
KR102306789B1 (en) License Plate Recognition Method and Apparatus for roads
Omidi et al. An embedded deep learning-based package for traffic law enforcement
CN112862856A (en) Method, device and equipment for identifying illegal vehicle and computer readable storage medium
KR20200133920A (en) Apparatus for recognizing projected information based on ann and method tnereof
US20150269732A1 (en) Obstacle detection device
CN114092902A (en) Method and device for detecting violation behaviors of electric bicycle
CN114141022B (en) Emergency lane occupation behavior detection method and device, electronic equipment and storage medium
CN114255597B (en) Emergency lane occupation behavior detection method and device, electronic equipment and storage medium
CN115909314A (en) License plate shielding detection method, illegal parking detection method and equipment thereof
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
Ge et al. A real-time lane detection algorithm based on intelligent ccd parameters regulation
CN112289040B (en) Method and device for identifying vehicle driving direction and storage medium
CN112686136B (en) Object detection method, device and system
KR102369824B1 (en) License Plate Recognition Method and Apparatus for roads
Castillo et al. Vsion: Vehicle occlusion handling for traffic monitoring
CN113147746A (en) Method and device for detecting ramp parking space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant