CN112712731B - Image processing method, device and system, road side equipment and cloud control platform

Info

Publication number
CN112712731B
CN112712731B
Authority
CN
China
Prior art keywords
intersection
endpoint
distance
image
stop line
Prior art date
Legal status
Active
Application number
CN202011517415.8A
Other languages
Chinese (zh)
Other versions
CN112712731A (en
Inventor
刘博
黄秀林
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202011517415.8A
Publication of CN112712731A
Application granted
Publication of CN112712731B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/167 - Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 - Recognition of vehicle lights or traffic lights
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, apparatus, and system, a roadside device, a cloud control platform, a program product, an electronic device, and a storage medium, and relates to the technical fields of image processing, automatic driving, intelligent transportation, and computer vision. The method comprises the following steps: acquiring an image of each intersection in an intersection object; determining a stop line and a center line of each intersection; calibrating the end point of the stop line of each intersection according to that stop line and center line to obtain a calibrated stop line for each intersection; generating a region of interest of the intersection object according to the calibrated stop lines; and sending the region of interest to a vehicle, where it is used to select objects to be identified. Because each stop line is calibrated based on the stop line and the center line, the calibrated stop line fits the actual stop line closely, which improves the accuracy and reliability of the region of interest generated from the calibrated stop lines.

Description

Image processing method, device and system, road side equipment and cloud control platform
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method, apparatus, and system, a roadside device, a cloud control platform, a program product, an electronic device, and a storage medium, which can be used in the technical fields of automatic driving, intelligent transportation, and computer vision.
Background
Image processing is widely used in many fields, such as automatic driving. In the automatic driving field, in order to improve the intelligence and automation of vehicle driving, a Region of Interest (ROI) may be set in advance so that only the image within that region is acquired and processed.
For example, in the related art, the region of interest at an intersection is generally determined using cameras that are disposed at the intersection and face each other.
However, the image capturing range of the cameras is limited by parameters such as the angle of view, so that the reliability of the region of interest determined by the cameras facing each other is low.
Disclosure of Invention
The application provides an image processing method, apparatus, and system, a roadside device, a cloud control platform, a program product, an electronic device, and a storage medium for improving the accuracy of a region of interest.
According to a first aspect of the present application, there is provided an image processing method comprising:
acquiring an image of each intersection in an intersection object, and determining a stop line and a center line of each intersection;
calibrating the end point of the stop line of each intersection according to the stop line and the center line of each intersection to obtain the calibrated stop line of each intersection; generating an interested area of the intersection object according to the calibrated stop line of each intersection;
and sending the region of interest to a vehicle, wherein the region of interest is used for selecting an object to be identified.
In the embodiment, the stop line of each intersection is calibrated for each intersection based on the stop line and the center line of each intersection, and the region of interest is generated based on the calibrated stop lines, so that the accuracy and reliability of the region of interest can be improved.
According to a second aspect of the present application, there is provided an image processing apparatus comprising:
the system comprises an acquisition module, a judgment module and a control module, wherein the acquisition module is used for acquiring an image of each intersection in an intersection object and determining a stop line and a center line of each intersection;
the calibration module is used for calibrating the end point of the stop line of each intersection according to the stop line and the center line of each intersection to obtain the calibrated stop line of each intersection;
the generating module is used for generating an interested area of the intersection object according to the calibrated stop line of each intersection;
and the communication module is used for sending the region of interest to a vehicle, and the region of interest is used for selecting the object to be identified.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present application, there is provided an image processing system comprising: an apparatus as in the second aspect, or an electronic device as in the third aspect; and further comprising:
an image acquisition device arranged at each intersection in the intersection object, configured to acquire an image of the opposite intersection and send the acquired image to the apparatus of the second aspect or the electronic device of the third aspect.
According to a fifth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the first aspect.
According to a sixth aspect of the present application, there is provided a roadside device comprising the electronic device of the third aspect.
According to a seventh aspect of the present application, there is provided a cloud control platform comprising the electronic device of the third aspect.
According to an eighth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
According to the present application: an image of each intersection in an intersection object is acquired; a stop line and a center line of each intersection are determined; the end point of the stop line of each intersection is calibrated according to that stop line and center line to obtain a calibrated stop line; a region of interest of the intersection object is generated according to the calibrated stop lines; and the region of interest is sent to a vehicle, where it is used to select objects to be identified. When the vehicle identifies those objects to obtain identification results and controls its driving based on the results, the safety and reliability of vehicle driving are improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a schematic diagram according to a second embodiment of the present application;
FIG. 3 is a schematic illustration according to a third embodiment of the present application;
fig. 4 is a schematic diagram of the principle of stop-line calibration according to the present embodiment;
FIG. 5 is a schematic illustration of a fourth embodiment according to the present application;
FIG. 6 is a schematic illustration according to a fifth embodiment of the present application;
fig. 7 is a schematic diagram according to a sixth embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
While a vehicle is driving, obstacles, traffic lights, and the like can be identified based on image recognition technology. This identification relies on a region of interest: an image area selected from the captured image that marks the focus of the image analysis. Using the region of interest to select the objects to be identified, such as obstacles and traffic lights, reduces processing time and improves the accuracy of the identification results.
The image processing method can be applied to an application scene of vehicle running.
Fig. 1 is a schematic diagram according to a first embodiment of the present application, and as shown in fig. 1, an application scenario of an image processing method according to the embodiment of the present application may include:
an intersection object (i.e., an intersection as shown in fig. 1) includes four intersections, such as a first intersection, a second intersection, a third intersection, and a fourth intersection as shown in fig. 1.
Each intersection is provided with: a traffic light 101, a camera 102, a stop line 103, and a center line 104.
Each camera 102 may acquire an opposite image and send the acquired image to a server (which may be a local server, a cloud server, or a road side unit, not shown in the figure).
The server may determine the region of interest based on the images sent by each camera 102. After determining the region of interest, the server may send it to each vehicle, so that when a vehicle travels to the intersection, objects to be identified, such as obstacles (other vehicles) and traffic lights, are selected based on the region of interest.
In the related art, an interested area is generally determined by images acquired by two cameras, and the two cameras are cameras arranged at intersections facing each other.
For example, in conjunction with the application scenario shown in fig. 1, the camera 102 disposed at the first intersection may acquire an image of a third intersection, and the camera disposed at the third intersection may acquire an image of the first intersection. The server may determine the region of interest based on the image of the first intersection (captured by the camera set up for the third intersection) and the image of the third intersection (captured by the camera set up for the first intersection).
However, the image capturing range of the cameras is limited by parameters such as the angle of view, so that the reliability of the region of interest determined by the cameras facing each other is low.
For example, due to the limitation of parameters such as the angle of view, the accuracy of the image of the first intersection acquired by the camera arranged at the third intersection is low; if the image of the first intersection cannot be completely acquired, the accuracy of the region of interest determined by the server suffers as well, for example, the determined region of interest may be smaller than it should be.
The inventor of the present application arrived at the inventive concept of the application through creative work: identify the image of each intersection to obtain the corresponding stop line and center line, calibrate the stop line of each intersection based on that stop line and center line, and generate the region of interest based on the calibrated stop lines.
Based on this inventive concept, the application provides an image processing method, apparatus, and system, a roadside device, a cloud control platform, a program product, an electronic device, and a storage medium, applied to automatic driving, intelligent transportation, and computer vision within image processing, so as to improve the reliability and accuracy of image processing.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic diagram of a second embodiment of the present application, and as shown in fig. 2, the image processing method provided in this embodiment includes:
s201: images of each intersection in the intersection objects are collected, and a stop line and a center line of each intersection are determined.
For example, the execution subject of the embodiment may be an image processing apparatus, and the image processing apparatus may specifically be a server (including a cloud server and a local server), a computer, a terminal device, a processor, a chip, and the like.
For example, when the image processing method of the present embodiment is applied to an application scene as shown in fig. 1, the image processing apparatus may be a server, and the server may be connected to each camera, and is configured to acquire an image captured by each camera and determine a stop line and a center line of each intersection based on the image of each intersection.
In one example, the intersection object may be an intersection, such as the intersection shown in fig. 1.
In another example, the intersection object may also be a T-junction.
That is, the application scenario shown in fig. 1 is only one application scenario to which the image processing method of the present embodiment can be applied, and cannot be understood as a limitation on the application scenario of the image processing method of the present embodiment.
Now, referring to the application scenario shown in fig. 1, taking the image processing apparatus as a server as an example, the steps are described as follows:
the camera arranged at the first intersection can acquire opposite images, such as images including a third intersection, and in order to distinguish the images acquired by the cameras, the opposite images acquired by the camera arranged at the first intersection are called first images; similarly, the camera arranged at the second intersection can also acquire an opposite image, such as an image including a fourth intersection, which is called a second image; similarly, the camera arranged at the third intersection can also acquire an opposite image, such as an image including the first intersection, which is called a third image; similarly, the camera arranged at the fourth intersection can also acquire an opposite image, such as an image including the second intersection, which is called a fourth image.
The camera set at the first intersection transmits a first image to the server, the camera set at the second intersection transmits a second image to the server, the camera set at the third intersection transmits a third image to the server, and the camera set at the fourth intersection transmits a fourth image to the server.
The server determines a stop line and a center line of a third intersection based on the first image, determines a stop line and a center line of a fourth intersection based on the second image, determines a stop line and a center line of the first intersection based on the third image, and determines a stop line and a center line of the second intersection based on the fourth image.
S202: and calibrating the end points of the stop line of each intersection according to the stop line and the center line of each intersection to obtain the calibrated stop line of each intersection.
With reference to the application scenario shown in fig. 1 and the above example, the server calibrates the end point of the stop line of the first intersection according to the stop line and the center line of the first intersection to obtain a calibrated stop line of the first intersection, and so on to obtain a calibrated stop line of the second intersection, a calibrated stop line of the third intersection, and a calibrated stop line of the fourth intersection, respectively.
S203: and generating an interested area of the intersection object according to the calibrated stop line of each intersection.
In connection with the application scenario shown in fig. 1 and the above example, the server generates the region of interest of the intersection according to the calibrated stop line of the first intersection, the calibrated stop line of the second intersection, the calibrated stop line of the third intersection, and the calibrated stop line of the fourth intersection.
It should be noted that, in this embodiment, the end points of the stop lines are calibrated and the region of interest is generated from the calibrated stop lines. Compared with the related art, where the region of interest is generated from images acquired by cameras facing each other and its accuracy suffers from limitations such as the angle of view, this improves the accuracy and reliability of the region of interest.
S204: and sending the region of interest to the vehicle, wherein the region of interest is used for selecting the object to be identified.
With reference to the application scenario shown in fig. 1 and the above example, after determining the region of interest, the server may send it to the vehicle. During driving, the vehicle may select objects to be identified, such as obstacles and traffic lights, based on the region of interest, identify them to obtain identification results, and control its driving based on those results, for example decelerating based on the identification result of an obstacle, or braking based on the identification result of a traffic light.
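As an illustration only (the following sketch is not part of the patent text, and the function names and the use of OpenCV are assumptions), the vehicle-side selection step can be pictured as a point-in-polygon test over the received region of interest:

```python
# Hypothetical vehicle-side sketch: keep only detected candidates whose
# reference point falls inside (or on) the received ROI polygon before
# running full identification on them.
import numpy as np
import cv2

def select_objects_to_identify(detections, roi_polygon):
    """detections: list of (x, y) reference points of detected candidates;
    roi_polygon: (K, 2) vertices of the ROI, in the same coordinate frame."""
    poly = np.asarray(roi_polygon, dtype=np.float32).reshape(-1, 1, 2)
    return [pt for pt in detections
            if cv2.pointPolygonTest(poly, (float(pt[0]), float(pt[1])), False) >= 0]
```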
Based on the above analysis, the present embodiment provides an image processing method comprising: acquiring an image of each intersection in an intersection object; determining a stop line and a center line of each intersection; calibrating the end point of the stop line of each intersection according to that stop line and center line to obtain a calibrated stop line; generating a region of interest of the intersection object according to the calibrated stop lines; and sending the region of interest to a vehicle, where it is used to select objects to be identified. Because the stop line of each intersection is calibrated based on the stop line and the center line, the calibrated stop line fits the actual stop line closely. Generating the region of interest from the calibrated stop lines therefore improves its accuracy and reliability; selecting objects to be identified based on this region of interest improves the comprehensiveness and completeness of the selection; and when the vehicle identifies those objects to obtain identification results and controls its driving based on the results, the safety and reliability of vehicle driving are improved.
Fig. 3 is a schematic diagram of a third embodiment of the present application, and as shown in fig. 3, the image processing method provided in this embodiment includes:
s301: images of each intersection in the intersection objects are collected, and a stop line and a center line of each intersection are determined.
For example, the description about S301 may refer to S201, and is not described herein again.
In some embodiments, the image processing apparatus may determine the stop line and the center line of each intersection based on image recognition technology, specifically by way of lane line detection.
For example, a sample image including a lane line may be acquired, the base network model may be trained based on the sample image, a lane line detection model for detecting the lane line may be generated, and each image may be detected based on the lane line detection model, so as to obtain a stop line and a center line corresponding to each image.
Illustratively, there may be no centerline on the lane, and thus, in some embodiments, the server may determine a segmentation line for the lane based on image recognition techniques, determine a centerline based on the segmentation line, and so on.
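The embodiment does not prescribe a particular detector, so the following is a minimal sketch only, assuming a classical OpenCV Canny-plus-Hough pipeline and a roughly head-on camera view; in practice the trained lane line detection model described above would take its place:

```python
# Illustrative classical-CV stand-in for the lane line detection model:
# find line segments with Canny + probabilistic Hough, then split them into
# stop-line candidates (near-horizontal) and center-line candidates.
import cv2
import numpy as np

def detect_segments(image_bgr):
    """Return candidate line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

def split_stop_and_center(segments, max_stop_slope=0.2):
    """Crude heuristic: near-horizontal segments are stop-line candidates,
    the rest center-line candidates (assumes a roughly head-on view)."""
    stop, center = [], []
    for x1, y1, x2, y2 in segments:
        slope = abs(y2 - y1) / (abs(x2 - x1) + 1e-6)
        (stop if slope < max_stop_slope else center).append((x1, y1, x2, y2))
    return stop, center
```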
S302: for each intersection, position information of an intersection between the center line and the stop line is determined, and position information of an end point of the stop line is determined.
In connection with the application scenario shown in fig. 1 and the above example, for the first intersection, there is an intersection point (hereinafter referred to as the first intersection point) between the center line and the stop line, and the server determines the position information of this first intersection point; the stop line has two end points, and the server determines the position information of each of them.
For the position information of the intersection point between the center line and the stop line of the other intersections (i.e., the second, third, and fourth intersections), and the position information of the end points of their stop lines, reference may be made to the description for the first intersection, which is not repeated here.
Similarly, in this embodiment, the position information of the intersection between each center line and each stop line and the position information of the end point of each stop line may also be determined by the lane line identification technique.
S303: the end points of the stop-line are calibrated based on the position information of the intersection points and the position information of the end points of the stop-line.
In connection with the application scenario as shown in fig. 1 and the above example, for the first intersection, the server calibrates one end point of the stop-line based on the position information of the first intersection point and the position information of the two end points of the stop-line.
It should be noted that this embodiment calibrates the end point of the stop line according to the position information of the intersection point and of the end point. Since the center line is generally located in the middle of the lane, it is relatively unlikely to be affected by the angle of view: even when, due to the angle of view or similar factors, the image contains the center line but not every end point of the stop line, calibrating the end point avoids the problem that the stop line cannot be completely displayed in the image and that a region of interest determined from such a partial stop line has low reliability. This improves the accuracy and reliability of the determined region of interest.
In some embodiments, the distance between the end points of the calibrated stop-line for each intersection is greater than the distance between the end points of the pre-calibrated stop-line for each intersection.
In this embodiment, for any stop line, the distance between the end points of the calibrated stop line is greater than the distance between the end points of the stop line before calibration, so that the range of the determined region of interest is greater than the region of interest determined based on the method in the related art, thereby improving the technical effects of comprehensiveness and integrity of the region of interest coverage.
In some embodiments, the position information of the intersection point includes the image coordinates of the intersection point in an image coordinate system, and the position information of the end point of the stop line includes the image coordinates of the end point in the image coordinate system. In this case, S303 may include the following steps:
step 1: and converting the image coordinates of the intersection point in the image coordinate system into the world coordinates in the world coordinate system, and converting the image coordinates of the end point in the image coordinate system into the world coordinates in the world coordinate system.
It should be noted that, regarding the conversion method between the image coordinate system and the world coordinate system in the present embodiment, reference may be made to the conversion method in the related art, and details thereof are not described herein.
Step 2: the world coordinates of the end point are adjusted based on the world coordinates of the intersection point and the world coordinates of the end point.
In this embodiment, the world coordinates of the end points are adjusted based on the conversion between image coordinates and world coordinates, so that the adjusted end points fit the stop lines of the real scene closely, improving the accuracy and reliability of the determined region of interest.
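Since the conversion itself is left to the related art, the following is a minimal sketch under one common assumption: the ground is a plane, and a 3x3 homography H obtained from offline camera calibration maps image pixels to world-plane coordinates (H and the names below are assumptions, not the patent's method):

```python
# Assumed conversion: planar ground, with a 3x3 homography H from offline
# camera calibration mapping image pixels to world-plane coordinates.
import numpy as np

def image_to_world(pt_img, H):
    """Map an image point (u, v) to world-plane coordinates (x, y) via H."""
    u, v = pt_img
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# Usage: convert the intersection point a and the stop-line end points b, c
# from image coordinates to world coordinates before adjusting the end points.
# a_w = image_to_world(a_img, H); b_w = image_to_world(b_img, H); ...
```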
In some embodiments, the stop-line has two end points, respectively a first end point and a second end point, and the world coordinates of the end points include: world coordinates of the first endpoint and world coordinates of the second endpoint; step 2 may comprise the following sub-steps:
substep 1: a first distance between the first end point and the intersection point is determined based on the world coordinates of the first end point and the world coordinates of the intersection point, and a second distance between the second end point and the intersection point is determined based on the world coordinates of the second end point and the world coordinates of the intersection point.
Fig. 4 is a schematic diagram of the principle of stop-line calibration according to the present embodiment, and this sub-step is described below in conjunction with fig. 4:
as shown in fig. 4, the intersection point of the center line 104 and the stop line 103 is marked a, the first end point of the stop line 103 is marked b, and the second end point of the stop line 103 is marked c;
the world coordinates of the first end point b are labeled (xb, yb), the world coordinates of the intersection point a are labeled (xa, ya), and the world coordinates of the second end point c are labeled (xc, yc);
since the first end point b, the intersection point a, and the second end point c are three points on the same straight line, their ordinates are the same, that is, yb = ya = yc; the first distance between the first end point b and the intersection point a is then d1 = |xb - xa|, and the second distance between the second end point c and the intersection point a is d2 = |xc - xa|.
Substep 2: and determining an endpoint to be adjusted from the first endpoint and the second endpoint according to the first distance and the second distance, and adjusting the endpoint to be adjusted.
It should be noted that, in the present embodiment, by determining the endpoint to be adjusted based on the first distance d1 and the second distance d2, convenience and rapidity in determining the endpoint to be adjusted may be improved, thereby improving the technical effect of efficiency in determining the region of interest.
In some embodiments, determining the endpoint to be adjusted from the first endpoint and the second endpoint may include: if the first distance is smaller than the second distance, determining the first endpoint as an endpoint to be adjusted; and if the first distance is greater than the second distance, determining the second endpoint as the endpoint to be adjusted.
In conjunction with fig. 4 and the above example, the present embodiment can be understood as: compare the first distance d1 with the second distance d2; if d1 is less than d2, determine the first end point b as the end point to be adjusted, and if d1 is greater than d2, determine the second end point c as the end point to be adjusted.
It should be noted that, in this embodiment, by comparing the two distances (i.e., the first distance and the second distance) and taking the end point corresponding to the smaller distance as the end point to be adjusted, the end point to be adjusted can be determined more efficiently, thereby improving the efficiency of determining the region of interest.
In some embodiments, after the server determines the endpoint to be adjusted, the endpoint to be adjusted may be adjusted based on the difference between the two distances.
For example, referring to fig. 4 and the above example, if the first distance d1 is smaller than the second distance d2 and the first endpoint b is the endpoint to be adjusted, the world coordinates of the first endpoint b are adjusted based on the direction away from the intersection point a.
Specifically, the difference d21 between the second distance d2 and the first distance d1 is computed, that is, d21 = d2 - d1; the world coordinate of the first end point b is adjusted by d21 in the direction away from the intersection point a, and the world coordinate of the adjusted first end point (b' shown in fig. 4) is (xb + d21, yb).
Similarly, if the first distance d1 is greater than the second distance d2 and the second endpoint c is the endpoint to be adjusted, the world coordinate of the second endpoint c is adjusted based on the direction away from the intersection point a.
Specifically, the difference d12 between the first distance d1 and the second distance d2 is computed, that is, d12 = d1 - d2; the world coordinate of the second end point c is adjusted by d12 in the direction away from the intersection point a, and the world coordinate of the adjusted second end point is (xc + d12, yc).
It should be noted that, in this embodiment, the world coordinate of the end point to be adjusted is shifted away from the intersection point by the difference between the first distance and the second distance, so that the distance between the two adjusted end points increases and the region of interest determined from them grows accordingly. This avoids the problem in the related art that the determined region of interest is too small due to factors such as the angle of view, and improves the accuracy and reliability of the determined region of interest.
In addition, in the embodiment, the adjusted world coordinates of the end points are determined based on the distances from the two end points to the intersection point, so that the adjusted world coordinates of the end points can be determined conveniently and quickly, and the technical effect of improving the efficiency of determining the region of interest is achieved.
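Putting sub-steps 1 and 2 together, a minimal sketch of the calibration in world coordinates follows (names are illustrative; the direction handling generalizes the fig. 4 case, where the shift is written simply as xb + d21):

```python
# Sketch of the end-point calibration in world coordinates: measure the
# distances d1, d2 from the end points b, c to the intersection point a,
# then shift the nearer end point away from a by the difference, so that
# both end points end up equidistant from the center line.
def calibrate_stop_line(a, b, c):
    """a, b, c: (x, y) world coordinates of the intersection point and the
    two end points; b, a, c are assumed collinear with equal y (fig. 4)."""
    (xa, ya), (xb, yb), (xc, yc) = a, b, c
    d1, d2 = abs(xb - xa), abs(xc - xa)
    if d1 < d2:
        # first end point b is nearer: move it away from a by d21 = d2 - d1
        direction = 1.0 if xb >= xa else -1.0
        b = (xb + direction * (d2 - d1), yb)
    elif d2 < d1:
        # second end point c is nearer: move it away from a by d12 = d1 - d2
        direction = 1.0 if xc >= xa else -1.0
        c = (xc + direction * (d1 - d2), yc)
    return b, c

# Example: a=(5, 0), b=(2, 0), c=(12, 0) gives d1=3, d2=7; b moves to (-2, 0),
# so both end points are now 7 units from the intersection point a.
```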
S304: and constructing a convex polygon according to the calibrated end points of each intersection, and determining the region in the convex polygon as the region of interest.
In connection with the above example and the application scenario shown in fig. 1, this step can be understood as: after the stop lines of the four intersections (the first intersection, the second intersection, the third intersection and the fourth intersection) are calibrated, the calibrated endpoints are sequentially connected to obtain a convex polygon, and the region framed by the convex polygon is the region of interest.
It should be noted that, in this embodiment, the convex polygon is constructed from the calibrated end points, and the region framed by the convex polygon is determined as the region of interest. The resulting region of interest is larger than one determined from the end points before calibration, which counteracts the influence of factors such as the angle of view and improves the completeness and reliability of the determined region of interest.
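A minimal sketch of this step, assuming the calibrated end points are available as world-plane coordinates and using OpenCV's convex hull routine (an assumption; any convex hull computation would do):

```python
# Sketch of S304: take the convex hull of all calibrated stop-line end
# points; the region framed by this convex polygon is the region of interest.
import numpy as np
import cv2

def build_region_of_interest(calibrated_endpoints):
    """calibrated_endpoints: list of (x, y) world coordinates, two per stop
    line (eight points for the four-way intersection of fig. 1).
    Returns the (K, 2) vertices of the convex ROI polygon."""
    pts = np.asarray(calibrated_endpoints, dtype=np.float32)
    return cv2.convexHull(pts).reshape(-1, 2)
```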
S305: and sending the region of interest to the vehicle, wherein the region of interest is used for selecting the object to be identified.
For example, the description about S305 may refer to S204, which is not described again here.
Fig. 5 is a schematic diagram of a fourth embodiment of the present application, and as shown in fig. 5, the present embodiment provides an image processing apparatus 500, including:
the acquisition module 501 is configured to acquire an image of each intersection in an intersection object, and determine a stop line and a center line of each intersection;
a calibration module 502, configured to calibrate an end point of the stop line of each intersection according to the stop line and the center line of each intersection, so as to obtain a calibrated stop line of each intersection;
a generating module 503, configured to generate an interested area of the intersection object according to the calibrated stop line of each intersection;
a communication module 504, configured to send the region of interest to a vehicle, where the region of interest is used to select an object to be identified.
In some embodiments, the calibration module 502 is configured to, for each intersection, determine position information of an intersection between the centerline and the stop-line, determine position information of an end point of the stop-line, and calibrate the end point of the stop-line based on the position information of the intersection and the position information of the end point of the stop-line.
In some embodiments, the distance between the end points of the calibrated stop-line of each intersection is greater than the distance between the end points of the pre-calibrated stop-line of each intersection.
In some embodiments, the location information of the intersection point comprises: image coordinates of the intersection point in an image coordinate system; the position information of the end point of the stop-line comprises: image coordinates of the endpoint in an image coordinate system; the calibration module 502 is configured to convert the image coordinates of the intersection point in the image coordinate system into world coordinates in a world coordinate system, convert the image coordinates of the endpoint in the image coordinate system into world coordinates in the world coordinate system, and adjust the world coordinates of the endpoint based on the world coordinates of the intersection point and the world coordinates of the endpoint.
In some embodiments, the stop-line has two end points, respectively a first end point and a second end point, and the world coordinates of the end points include: world coordinates of the first endpoint and world coordinates of the second endpoint; the calibration module 502 is configured to determine a first distance between the first end point and the intersection point based on the world coordinate of the first end point and the world coordinate of the intersection point, determine a second distance between the second end point and the intersection point based on the world coordinate of the second end point and the world coordinate of the intersection point, determine an end point to be adjusted from the first end point and the second end point according to the first distance and the second distance, and adjust the end point to be adjusted.
In some embodiments, the calibration module 502 is configured to determine the first endpoint as the endpoint to be adjusted if the first distance is smaller than the second distance; and if the first distance is greater than the second distance, determining the second endpoint as an endpoint to be adjusted.
In some embodiments, the calibration module 502 is configured to adjust the world coordinates of the endpoint to be adjusted based on a direction away from the intersection point according to a difference between the first distance and the second distance.
In some embodiments, if the first endpoint is the endpoint to be adjusted, the calibration module 502 is configured to determine a first difference by subtracting the first distance from the second distance, and determine the sum of the first difference and the world coordinate of the first endpoint as the adjusted world coordinate of the first endpoint;
if the second endpoint is the endpoint to be adjusted, the calibration module 502 is configured to determine a second difference by subtracting the second distance from the first distance, and determine the sum of the second difference and the world coordinate of the second endpoint as the adjusted world coordinate of the second endpoint.
In some embodiments, the generating module 503 is configured to construct a convex polygon according to the calibrated end points of the intersections, and determine a region in the convex polygon as the region of interest.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the present application described and/or claimed herein.
Fig. 6 is a schematic diagram according to a fifth embodiment of the present application, and as shown in fig. 6, an electronic device 600 provided in this embodiment includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the image processing method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the image processing method provided by the present application.
The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present application (for example, the acquisition module 501, the calibration module 502, the generation module 503, and the communication module 504 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, i.e., implements the image processing method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the image processing method, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, and these remote memories may be connected to the electronic device of the image processing method through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image processing method may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the image processing method, such as a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, joystick, or other input devices. The output device 604 may include a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Block-chain-Based Service Networks (BSNs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server may be a cloud Server, which is also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the conventional physical host and Virtual Private Server (VPS) service.
According to another aspect of the embodiments of the present application, there is also provided an image processing system including: an image processing apparatus as described in the above embodiment (such as the image processing apparatus 500 shown in fig. 5), or an electronic device as described in the above embodiment (such as the electronic apparatus 600 shown in fig. 6); further comprising:
the image capturing device disposed at each intersection in the intersection object is configured to capture an image of an opposite intersection and send the captured image to the image processing device (such as the image processing device 500 shown in fig. 5) according to the foregoing embodiment, or the electronic device (such as the electronic device 600 shown in fig. 6) according to the foregoing embodiment.
In some embodiments, the system further comprises:
the vehicle is used for receiving the region of interest transmitted by the image processing device (such as the image processing device 500 shown in fig. 5) described in the above embodiment or the electronic device (such as the electronic device 600 shown in fig. 6) described in the above embodiment, selecting the object to be identified according to the region of interest, identifying the object to be identified, obtaining the identification result, and controlling the vehicle to run based on the identification result.
For example, in conjunction with the application scenario shown in fig. 1, the image capturing device is a camera as shown in fig. 1. The camera at each intersection captures an image of the opposite intersection and transmits it to the image processing apparatus (e.g., the image processing apparatus 500 shown in fig. 5) or the electronic device (e.g., the electronic device 600 shown in fig. 6) described in the above embodiments, which generates a region of interest based on the method described above and transmits it to the vehicle. The vehicle selects objects to be identified based on the region of interest, identifies them to obtain identification results, and controls its driving based on those results.
Fig. 7 is a schematic diagram of a sixth embodiment of the present application, and as shown in fig. 7, the present embodiment provides an image processing system 700 including:
each of the four image capturing devices 701 is configured to capture an image including an oncoming intersection, and send the captured image to the image processing device 500.
For example, the image capturing device 701 may be a camera as shown in fig. 1, or may be other devices that can be used to capture an image, and this embodiment is not limited.
The image processing apparatus 500 is configured to obtain a region of interest (the specific principle may be described in the fifth embodiment, and is not described herein again) based on the acquisition module 501, the calibration module 502, and the generation module 503, and send the region of interest to the vehicle 702 through the communication module 504.
The vehicle 702 selects objects to be identified based on the region of interest, identifies them to obtain identification results, and controls the vehicle to drive based on those results, for example accelerating, turning, or decelerating.
According to another aspect of the embodiments of the present application, there is also provided a roadside device, which may include the electronic device described in the above embodiments, for example, the electronic device shown in fig. 6.
Illustratively, the roadside apparatus may include a communication component and the like in addition to the electronic apparatus, and the electronic apparatus may be integrated with the communication component or may be provided separately. The electronic device may acquire data, such as pictures and videos, from an image capture device (e.g., a roadside camera) for image video processing and data computation.
According to another aspect of the embodiment of the present application, an embodiment of the present application further provides a cloud control platform, including the electronic device according to the above embodiment, for example, including the electronic device shown in fig. 6.
For example, the cloud control platform performs processing at the cloud end, and an electronic device included in the cloud control platform may acquire data, such as pictures and videos, of an image acquisition device (such as a roadside camera), so as to perform image video processing and data calculation; the cloud control platform can also be called a vehicle-road cooperative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server and the like.
According to another aspect of the embodiments of the present application, there is also provided a computer program product including a computer program, which when executed by a processor implements the method according to any one of the embodiments above, such as implementing the method shown in fig. 2 or fig. 3.
It should be understood that the flows shown above may be used in various forms, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited in this respect, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments are not intended to limit the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (18)

1. An image processing method comprising:
acquiring an image of each intersection in an intersection object, and determining a stop line and a center line of each intersection;
determining, for each intersection, the position information of the intersection point between the center line and the stop line and the position information of the end point of the stop line;
calibrating the end point of the stop line based on the position information of the intersection point and the position information of the end point of the stop line to obtain the calibrated stop line of each intersection; and generating a region of interest of the intersection object according to the calibrated stop line of each intersection;
sending the region of interest to a vehicle, wherein the region of interest is used for selecting an object to be identified;
wherein the position information of the intersection point includes: image coordinates of the intersection point in an image coordinate system; the position information of the end point of the stop line includes: image coordinates of the end point in the image coordinate system; and calibrating the end point of the stop line based on the position information of the intersection point and the position information of the end point of the stop line comprises:
converting the image coordinates of the intersection points in an image coordinate system into world coordinates in a world coordinate system, and converting the image coordinates of the end points in the image coordinate system into the world coordinates in the world coordinate system;
adjusting the world coordinates of the end points based on the world coordinates of the intersection points and the world coordinates of the end points;
the stop line has two end points, namely a first end point and a second end point, and the world coordinates of the end points comprise: world coordinates of the first endpoint and world coordinates of the second endpoint; adjusting the world coordinates of the endpoint based on the world coordinates of the intersection point and the world coordinates of the endpoint comprises:
determining a first distance between the first end point and the intersection point based on the world coordinate of the first end point and the world coordinate of the intersection point, and determining a second distance between the second end point and the intersection point based on the world coordinate of the second end point and the world coordinate of the intersection point;
and determining an endpoint to be adjusted from the first endpoint and the second endpoint according to the first distance and the second distance, and adjusting the endpoint to be adjusted.
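For illustration only, a minimal sketch of one common way to realize the image-to-world conversion recited above, assuming a known 3x3 ground-plane homography H obtained from camera calibration; the function name and the homography itself are assumptions, not details fixed by the claims.

import numpy as np

def image_to_world(point, H):
    # Map an image-coordinate point (u, v) to world coordinates using the
    # homography H; dividing by w[2] normalizes the homogeneous coordinate.
    u, v = point
    w = H @ np.array([u, v, 1.0])
    return w[:2] / w[2]

# Hypothetical usage with an identity homography (image plane == world plane).
H = np.eye(3)
print(image_to_world((640.0, 360.0), H))  # [640. 360.]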
2. The method of claim 1, wherein a distance between the end points of the calibrated stop line of each intersection is greater than the distance between the end points of the stop line of that intersection before calibration.
3. The method of claim 1, wherein determining an endpoint to adjust from the first and second endpoints according to the first and second distances comprises:
if the first distance is smaller than the second distance, determining the first endpoint as an endpoint to be adjusted;
and if the first distance is greater than the second distance, determining the second endpoint as the endpoint to be adjusted.
4. The method of claim 3, wherein adjusting the endpoint to be adjusted comprises:
adjusting the world coordinate of the endpoint to be adjusted in a direction away from the intersection point according to the difference value between the first distance and the second distance.
5. The method of claim 3, wherein if the first endpoint is an endpoint to be adjusted, adjusting the determined endpoint to be adjusted comprises: determining a first difference value between the first distance and the second distance, and determining the sum of the first difference value and the world coordinate of the first endpoint as the world coordinate of the adjusted first endpoint;
if the second endpoint is the endpoint to be adjusted, adjusting the determined endpoint to be adjusted includes: determining a second difference value between the first distance and the second distance, and determining the sum of the second difference value and the world coordinate of the second endpoint as the adjusted world coordinate of the second endpoint.
6. The method of any of claims 1 to 5, wherein generating a region of interest for the intersection object from the calibrated endpoints for each intersection comprises:
constructing a convex polygon according to the calibrated end points of each intersection, and determining the region within the convex polygon as the region of interest.
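For illustration only, a minimal sketch of the endpoint calibration of claims 1 and 3 to 5 and the convex-polygon region of interest of claim 6, under the assumption that the endpoint nearer to the intersection point is moved away from it along the stop line by the distance difference, which equalizes the two distances and lengthens the stop line consistently with claim 2; all names are hypothetical.

import numpy as np

def calibrate_stop_line(p1, p2, c):
    # p1, p2: world coordinates of the first and second end points;
    # c: world coordinates of the intersection of center line and stop line.
    p1, p2, c = map(np.asarray, (p1, p2, c))
    d1 = np.linalg.norm(p1 - c)  # first distance
    d2 = np.linalg.norm(p2 - c)  # second distance
    diff = abs(d1 - d2)          # difference value of claims 4 and 5
    if d1 < d2:
        p1 = p1 + diff * (p1 - c) / d1  # move the first end point away from c
    elif d2 < d1:
        p2 = p2 + diff * (p2 - c) / d2  # move the second end point away from c
    return p1, p2

def convex_polygon(points):
    # Monotone-chain convex hull over all calibrated end points (claim 6);
    # the region enclosed by the returned vertices is the region of interest.
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

# Hypothetical usage: the first end point lies too close to the intersection.
p1, p2 = calibrate_stop_line((1.0, 0.0), (-3.0, 0.0), (0.0, 0.0))
print(p1, p2)  # [3. 0.] [-3. 0.]: both end points now 3 units from c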
7. An image processing apparatus comprising:
an acquisition module, which is used for acquiring an image of each intersection in an intersection object and determining a stop line and a center line of each intersection;
the calibration module is used for calibrating the end point of the stop line of each intersection according to the stop line and the center line of each intersection to obtain the calibrated stop line of each intersection;
the generating module is used for generating a region of interest of the intersection object according to the calibrated stop line of each intersection;
the communication module is used for sending the region of interest to a vehicle, and the region of interest is used for selecting an object to be identified;
the calibration module is used for determining, for each intersection, the position information of the intersection point between the center line and the stop line, determining the position information of the end point of the stop line, and calibrating the end point of the stop line based on the position information of the intersection point and the position information of the end point of the stop line;
wherein the position information of the intersection point includes: image coordinates of the intersection point in an image coordinate system; the position information of the end point of the stop line includes: image coordinates of the endpoint in an image coordinate system; the calibration module is used for converting the image coordinates of the intersection points in the image coordinate system into world coordinates in a world coordinate system, converting the image coordinates of the end points in the image coordinate system into world coordinates in the world coordinate system, and adjusting the world coordinates of the end points based on the world coordinates of the intersection points and the world coordinates of the end points;
the stop line has two end points, namely a first end point and a second end point, and the world coordinates of the end points comprise: world coordinates of the first endpoint and world coordinates of the second endpoint; the calibration module is used for determining a first distance between the first end point and the intersection point based on the world coordinate of the first end point and the world coordinate of the intersection point, determining a second distance between the second end point and the intersection point based on the world coordinate of the second end point and the world coordinate of the intersection point, determining an endpoint to be adjusted from the first endpoint and the second endpoint according to the first distance and the second distance, and adjusting the endpoint to be adjusted.
8. The apparatus of claim 7, wherein a distance between the end points of the calibrated stop line of each intersection is greater than the distance between the end points of the stop line of that intersection before calibration.
9. The apparatus of claim 7, wherein the calibration module is configured to determine the first endpoint as the endpoint to be adjusted if the first distance is less than the second distance; and if the first distance is greater than the second distance, determining the second endpoint as the endpoint to be adjusted.
10. The apparatus of claim 9, wherein the calibration module is configured to adjust world coordinates of the endpoint to be adjusted based on a direction away from an intersection point according to a difference between the first distance and the second distance.
11. The apparatus of claim 9, wherein if the first endpoint is an endpoint to be adjusted, the calibration module is configured to determine a first difference between the first distance and the second distance, and determine a sum of the first difference and the world coordinate of the first endpoint as the world coordinate of the adjusted first endpoint;
and if the second endpoint is the endpoint to be adjusted, the calibration module is configured to determine a second difference between the first distance and the second distance, and determine a sum of the second difference and the world coordinate of the second endpoint as the adjusted world coordinate of the second endpoint.
12. The apparatus according to any one of claims 7 to 11, wherein the generating module is configured to construct a convex polygon from the calibrated end points of each intersection, and determine a region in the convex polygon as the region of interest.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. An image processing system comprising: the apparatus of any one of claims 7 to 12, or the electronic device of claim 13; further comprising:
an image capturing device provided at each intersection in the intersection object, for capturing an image of an oncoming intersection and transmitting the captured image to the apparatus according to any one of claims 7 to 12 or to the electronic device according to claim 13.
15. The system of claim 14, further comprising:
a vehicle, which is used for receiving the region of interest sent by the apparatus or the electronic device, selecting the object to be identified according to the region of interest, identifying the object to be identified to obtain an identification result, and controlling the vehicle to run based on the identification result.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
17. A roadside apparatus comprising the electronic apparatus of claim 13.
18. A cloud controlled platform comprising the electronic device of claim 13.
CN202011517415.8A 2020-12-21 2020-12-21 Image processing method, device and system, road side equipment and cloud control platform Active CN112712731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011517415.8A CN112712731B (en) 2020-12-21 2020-12-21 Image processing method, device and system, road side equipment and cloud control platform

Publications (2)

Publication Number Publication Date
CN112712731A CN112712731A (en) 2021-04-27
CN112712731B true CN112712731B (en) 2022-08-12

Family

ID=75544881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011517415.8A Active CN112712731B (en) 2020-12-21 2020-12-21 Image processing method, device and system, road side equipment and cloud control platform

Country Status (1)

Country Link
CN (1) CN112712731B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102815299A (en) * 2011-06-09 2012-12-12 通用汽车环球科技运作有限责任公司 Lane sensing through lane marker identification for lane centering/keeping
JP2014053866A (en) * 2012-09-10 2014-03-20 Ricoh Co Ltd Image forming apparatus, image forming program and image forming method
CN105809658A (en) * 2014-10-20 2016-07-27 三星Sds株式会社 Method and apparatus for setting region of interest
CN106778593A (en) * 2016-12-11 2017-05-31 北京联合大学 A kind of track level localization method based on the fusion of many surface marks
CN108933936A (en) * 2017-05-25 2018-12-04 通用汽车环球科技运作有限责任公司 Method and apparatus for camera calibration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102427542B (en) * 2011-09-28 2014-07-30 深圳超多维光电子有限公司 Method and device for processing three-dimensional image and terminal equipment thereof
CN106251651B (en) * 2016-08-24 2019-01-18 安徽科力信息产业有限责任公司 A kind of crossing traffic signal control method and system using plane cognition technology
CN110324532B (en) * 2019-07-05 2021-06-18 Oppo广东移动通信有限公司 Image blurring method and device, storage medium and electronic equipment
CN111079541B (en) * 2019-11-19 2022-03-08 重庆大学 Road stop line detection method based on monocular vision

Also Published As

Publication number Publication date
CN112712731A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN111401208B (en) Obstacle detection method and device, electronic equipment and storage medium
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
CN110738183B (en) Road side camera obstacle detection method and device
CN111722245B (en) Positioning method, positioning device and electronic equipment
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN110659600B (en) Object detection method, device and equipment
CN111578839B (en) Obstacle coordinate processing method and device, electronic equipment and readable storage medium
CN111612753B (en) Three-dimensional object detection method and device, electronic equipment and readable storage medium
CN110595490B (en) Preprocessing method, device, equipment and medium for lane line perception data
CN111536984A (en) Positioning method and device, vehicle-end equipment, vehicle, electronic equipment and positioning system
CN111666876A (en) Method and device for detecting obstacle, electronic equipment and road side equipment
CN111703371B (en) Traffic information display method and device, electronic equipment and storage medium
CN111222579A (en) Cross-camera obstacle association method, device, equipment, electronic system and medium
CN112668428A (en) Vehicle lane change detection method, roadside device, cloud control platform and program product
CN111652112A (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN111540023B (en) Monitoring method and device of image acquisition equipment, electronic equipment and storage medium
CN111191619A (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN111601013A (en) Method and apparatus for processing video frames
CN112509058B (en) External parameter calculating method, device, electronic equipment and storage medium
CN110798681B (en) Monitoring method and device of imaging equipment and computer equipment
CN112712731B (en) Image processing method, device and system, road side equipment and cloud control platform
CN111814651A (en) Method, device and equipment for generating lane line
CN112651983B (en) Splice graph identification method and device, electronic equipment and storage medium
CN110689575B (en) Image collector calibration method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211025

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant