CN111709354B - Method and device for identifying target area, electronic equipment and road side equipment - Google Patents
- Publication number
- CN111709354B (application CN202010536739.XA)
- Authority
- CN
- China
- Prior art keywords
- distance
- identifier
- markers
- determined
- location information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The embodiments of the present disclosure relate to a method, an apparatus, an electronic device, a computer-readable storage medium and a roadside device for identifying a target area, and relate to the field of intelligent transportation. The method includes obtaining location information for a set of markers detected in a current frame of a video. The method further includes determining, based on the location information, a distance between every two of at least some of the markers in the set, and determining a connected domain based on the distances, the connected domain including a number of markers greater than a first threshold number, the distance between every two markers included in the connected domain being less than a threshold distance; the area surrounded by the markers included in the connected domain is thereby determined as the target area. With this technical scheme, road condition information can be updated in a timely and accurate manner, and target area information can be broadcast to roadside devices, cloud platforms, vehicles and the like through vehicle-road cooperation (V2X) technology, thereby providing reliable data support for vehicle path planning and improving user experience.
Description
Technical Field
Embodiments of the present disclosure relate generally to the field of intelligent transportation and, more particularly, relate to a method, apparatus, electronic device, computer-readable storage medium, roadside device, and computer program product for identifying a target area.
Background
For an autonomous vehicle or an ordinary vehicle, it is very convenient for route planning if a target area that may block traffic (e.g., a construction area) is discovered ahead of time. However, the onboard sensors of an autonomous vehicle have a limited perception range, and for an ordinary vehicle the driver's viewing distance and viewing angle are limited. As a result, target areas on the road surface cannot be determined accurately, which increases the difficulty of subsequent path planning.
Disclosure of Invention
According to an example embodiment of the present disclosure, a scheme for identifying a target area is provided.
In a first aspect of the present disclosure, a method for identifying a target area is provided. The method may include obtaining location information for a set of markers detected in a current frame of a video. The method further includes determining, based on the location information, a distance between every two of at least a portion of the set of markers. The method may further include determining a connected domain based on the distances, the connected domain including a number of markers greater than a first threshold number, the distance between every two markers included in the connected domain being less than a threshold distance. Further, the method may include determining an area surrounded by the markers included in the connected domain as the target area.
In a second aspect of the present disclosure, there is provided an apparatus for identifying a target area, comprising: a location information acquisition module configured to acquire location information of a set of markers detected in a current frame of a video; a distance determination module configured to determine, based on the location information, a distance between every two of at least a portion of the set of markers; a connected domain determination module configured to determine, based on the distances, a connected domain including a number of markers greater than a first threshold number, the distance between every two markers included in the connected domain being less than a threshold distance; and a target area determination module configured to determine an area surrounded by the markers included in the connected domain as the target area.
In a third aspect of the present disclosure, an electronic device is provided that includes one or more processors; and storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement a method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, there is provided a roadside apparatus comprising an electronic apparatus as set forth in the third aspect of the present disclosure.
In a sixth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that what is described in this summary is not intended to limit the critical or essential features of the embodiments of the disclosure nor to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which various embodiments of the present disclosure may be implemented;
FIG. 2 shows a schematic diagram of a monitoring scenario according to an embodiment of the present disclosure;
FIG. 3A shows a schematic diagram of a frame in a surveillance video according to another embodiment of the disclosure;
FIG. 3B shows a schematic diagram of the individual markers identified in the frame of FIG. 3A in a planar coordinate system;
FIG. 4 illustrates a flow chart of a process for identifying a target area according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus for identifying a target area according to an embodiment of the present disclosure; and
FIG. 6 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its like should be taken to be open-ended, i.e., including, but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like, may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As mentioned above, an autonomous vehicle needs to obtain road condition information through roadside sensing devices in order to generate a more reasonable driving strategy. It should be appreciated that, because construction, damage, collapse and the like may occur on or near the road, identifying the target areas where these conditions occur and re-planning the driving route in time is critical for an autonomous vehicle.
There are two main types of conventional identification schemes. The first is manual collection and reporting. This is the most primitive, but simple and reliable, method: dedicated personnel patrol in a vehicle and, after finding a target area, obtain its specific position with a total station or a handheld GPS, then record and report it. However, this scheme is costly and inefficient; especially for larger cities, a full road-condition check takes too long, the update rate is too slow, and real-time updating is not supported.
To overcome the defects of the first scheme, the second scheme relies on reporting by pedestrians and drivers; that is, the target area is reported by pedestrians or by drivers of vehicles traveling on the road every day. However, the coverage of this scheme cannot be guaranteed, and for roads with few pedestrians and vehicles there may be missed reports.
In recent years, unmanned driving technology has gradually come to prominence, and more and more businesses are investing in research and production of autonomous driving. It is anticipated that partially autonomous vehicles will appear on roads in the near future. How to provide reliable road surface information for these autonomous vehicles is a current challenge.
According to an embodiment of the present disclosure, a scheme for identifying a target area is proposed. For example, position information of the identified markers may be obtained, and when the distances between one marker and certain other markers are all smaller than a threshold distance, those markers may be determined to belong to the same connected domain. Markers belonging to the same connected domain are likely to be arranged to surround a target area: for a marker in a target area, the smallest of its distances to all other markers in that area is necessarily less than some threshold distance. A region formed by a plurality of markers whose pairwise distances are smaller than the threshold distance is generally referred to as a connected domain, and determining the connected domain determines the target area associated with it.
Thus, the road surface information can be updated in real time by reporting the position information and timestamp of each marker contained in the target area. In addition, the scheme relies on roadside sensors and the edge computing nodes in roadside devices to detect in real time whether a target area exists on the road, and the roadside device can send information about the detected target area to a cloud platform, other roadside devices, vehicles and the like through vehicle-road cooperation (V2X) technology. In this way, autonomous vehicles and ordinary vehicles can obtain road surface information updated in real time, which greatly facilitates the planning of driving paths.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Fig. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure may be implemented. As shown in fig. 1, the example environment 100 includes a surveillance video, a computing device 120, and a target area 130. The surveillance video includes a plurality of frames, such as a frame 110 and a frame 111 preceding the frame 110. For convenience of description, the frame 110 will hereinafter also be referred to as the "current frame 110", and the frame 111 as the "previous frame 111". The target area 130 indicates an area that vehicles cannot pass through, and this area may affect the planning of the driving route.
As shown in fig. 1, the computing device 120 identifies the target area 130 by processing the surveillance video. In some embodiments, the computing device 120 may be located in the cloud and identifies the markers surrounding the target area according to a particular recognition model. It should be understood that a "marker" as described herein refers to an object with distinct topographical features used to mark a target area (such as a construction area or a collapse area), which may be a cone, a cylindrical barrel with a speckle pattern, or the like. In some embodiments, the computing device 120 may include, but is not limited to, a personal computer, a server computer, a hand-held or laptop device, a mobile device (such as a mobile phone, a personal digital assistant (PDA) or a media player), a multiprocessor system, consumer electronics, a minicomputer, a mainframe computer, a distributed computing environment that includes any of the above systems or devices, and the like.
Through processing by the computing device 120, a particular portion of the current frame 110 may be determined to be the target area 130. For example, each frame in the video may be monitored in real time, and when markers are found to be present in the current frame 110, location information for each marker may be determined. When it is determined that the distances between one marker and certain other markers are all smaller than the threshold distance, those markers may be determined to belong to the same connected domain. Thereby, the area surrounded by the markers belonging to the connected domain can be determined as the target area 130.
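By way of illustration only (not part of the claimed embodiments), the grouping of markers into connected domains by thresholded pairwise distances can be sketched in Python with a simple union-find; the function and parameter names here are hypothetical:

```python
import math

def find_connected_domains(points, threshold_dist, min_markers):
    """Group marker positions into connected domains: two markers are
    linked when their distance is below threshold_dist; a domain is kept
    only if it contains more than min_markers markers."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    # Link every pair of markers closer than the threshold distance.
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < threshold_dist:
                union(i, j)

    # Collect members per root and apply the first threshold number.
    domains = {}
    for i in range(n):
        domains.setdefault(find(i), []).append(i)
    return [m for m in domains.values() if len(m) > min_markers]
```

With the layout of fig. 3A (four cones close together and one isolated), `find_connected_domains([(0, 0), (1, 0), (1, 1), (0, 1), (10, 10)], 2.0, 3)` would return a single domain containing the first four markers, leaving the isolated marker out.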
In this way, the road surface information can be updated in real time by reporting the position information and timestamp of each marker contained in the target area 130. Because the road surface information can be updated in time, an autonomous vehicle or an ordinary vehicle can achieve more accurate and convenient path planning.
Fig. 2 shows a schematic diagram of a monitoring scenario 200 according to an embodiment of the present disclosure. In this monitoring scenario 200, there is a roadside device comprising a roadside monitoring apparatus 210 and the computing device 120, and the computing device 120 may be communicatively connected to the roadside monitoring apparatus 210 by wired or wireless means. As shown in fig. 2, the roadside monitoring apparatus 210 may be installed at a location similar to that of a traffic-violation camera, for monitoring the road surface information of a substantially straight road. It should be appreciated that a road is typically monitored by a plurality of roadside monitoring apparatuses at different locations, and the different monitoring results may be associated and fused by the computing device 120 to finally determine the target area. The roadside device can report information about the detected target area to a cloud platform through vehicle-road cooperation (V2X) technology, or broadcast it to other roadside devices, vehicles and the like.
As shown in fig. 2, the roadside monitoring apparatus 210 may capture markers such as the cones 221 and 222. It should be appreciated that the roadside monitoring apparatus 210 may also capture moving objects (not shown) such as persons, bicycles and motorcycles. It should also be appreciated that fig. 2 shows only the cones 221 and 222 for ease of discussion; the present disclosure is not limited to monitoring cones, and any other markers for indicating a target area (e.g., cylindrical barrels with a speckle pattern) that appear in the monitoring scene of the roadside monitoring apparatus 210 may likewise be monitored by it.
One frame of the monitoring video captured by the roadside monitoring apparatus 210 in fig. 2 will be described in detail below. Fig. 3A shows a schematic diagram of a frame 300A in a surveillance video according to another embodiment of the disclosure. In fig. 3A, markers 310-1, 310-2, 310-3, 310-4 and 310-5 (hereinafter collectively referred to as 310) are present in the frame 300A. A person can generally determine directly that there is a target area in the frame 300A formed by the markers 310-1, 310-2, 310-3 and 310-4, and that the marker 310-5 is isolated from that area. However, to save human resources, the computing device 120 is configured to undertake the task of identifying the target area.
To detect the markers 310 in the frame 300A, the computing device 120 first needs to identify them. Fig. 3B shows a schematic diagram of a positional relationship diagram 300B of the individual markers identified in the frame of fig. 3A in a planar coordinate system. It should be understood that the positional relationship diagram 300B is obtained by performing a two-dimensional spatial conversion on the frame 300A. In fig. 3B, the markers in the positional relationship diagram 300B may be detected using a pre-trained two-dimensional object detection model. For example, when the presence of a marker 310 is detected in the positional relationship diagram 300B, a detection frame (e.g., a box surrounding the marker 310 in fig. 3B) may be added to the marker 310. It will be appreciated that a rectangular detection frame is merely exemplary; the detection frame may, as desired, be circular, diamond-shaped, or any shape that follows the perimeter contour of the marker.
Thereafter, the computing device 120 may determine the location information of each marker based on the midpoint of its detection frame. It will be appreciated that the location information may also be determined based on the lower-left corner or another point of the detection frame. The computing device 120 may then determine, based on the location information of the markers, whether a target area exists and where it is located.
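To make the location step concrete, a minimal sketch (an illustrative assumption, not the patent's prescribed implementation) that takes the midpoint of a detection frame as a marker's location might look as follows:

```python
def box_center(box):
    """Location of a marker taken as the midpoint of its detection frame.
    box = (x_min, y_min, x_max, y_max) in plane coordinates."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def marker_locations(detection_boxes):
    """One location per detected marker, from its detection frame."""
    return [box_center(b) for b in detection_boxes]
```

As the text notes, the same pipeline would work with any other reference point of the frame (e.g. the lower-left corner) as long as it is used consistently.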
The technical solutions described above are only examples and do not limit the invention. It should be understood that the entire monitoring system may also be arranged with other configurations and connections. In order to explain the principle of the above scheme more clearly, a process of identifying a target area will be described in more detail with reference to fig. 4.
Fig. 4 illustrates a flow chart of a process 400 for identifying a target area according to an embodiment of the present disclosure. In some embodiments, the process 400 may be implemented in the computing device 120 shown in fig. 1. A process 400 for identifying a target area according to an embodiment of the present disclosure is now described with reference to fig. 4. For ease of understanding, the specific data set forth in the following description are intended to be exemplary and are not intended to limit the scope of the disclosure.
At 402, the computing device 120 may obtain location information for a set of markers 310 detected in a current frame 110 of the video. As an example, if the computing device 120 determines that the number of markers 310 in the current frame 110 is greater than or equal to a threshold number, this indicates that the markers may be intended to surround a target area, and the computing device 120 may therefore determine their location information. In this way, the markers surrounding the target area can be determined accurately in combination with a priori information. Alternatively or additionally, if the computing device 120 determines that the number of markers 310 in a predetermined number of consecutive frames of the video (as shown in fig. 1, the consecutive previous frames 111 up to the current frame 110) is greater than or equal to a threshold number, this indicates that the markers 310 have been steadily present in the video over a period of time and are likely intended to surround a target area. Thus, the computing device 120 may determine the location information of these markers 310. In this way, unstable recognition results can be filtered out.
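The consecutive-frames condition above can be sketched as a small sliding-window filter (an illustrative Python sketch with hypothetical names; the patent does not prescribe this implementation):

```python
from collections import deque

class PersistenceFilter:
    """Accept the detections only when at least `min_count` markers have
    been present in each of the last `window` consecutive frames."""

    def __init__(self, window, min_count):
        self.window = window
        self.min_count = min_count
        self.history = deque(maxlen=window)  # counts for recent frames

    def update(self, num_markers):
        """Record the marker count of the newest frame; return True when
        the threshold has held for the whole window."""
        self.history.append(num_markers)
        return (len(self.history) == self.window
                and all(n >= self.min_count for n in self.history))
```

For instance, with `PersistenceFilter(window=3, min_count=4)`, detections are passed on only once three consecutive frames have each contained at least four markers, which filters out a marker set that flickers in a single frame.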
In certain embodiments, the marker may be a cone as shown in figs. 2, 3A and 3B. Alternatively or additionally, the marker may also be a cylindrical barrel, a barrier ball, or the like, which functions as a barrier and has a particular shape and color. Such markers each have a particular shape that facilitates identification by the computing device 120.
At 404, the computing device 120 may determine, based on the location information, a distance between every two of at least a portion of the set of markers 310. As an example, the computing device 120 may traverse the set and calculate the distance between marker 310-1 and each of the other markers, between marker 310-2 and each of the other markers, between marker 310-3 and each of the other markers, between marker 310-4 and each of the other markers, and between marker 310-5 and each of the other markers.
In some embodiments, if a first distance between a first marker and a second marker in the set of markers and a second distance between the first marker and a third marker are both less than the threshold distance, the distance between the second marker and the third marker is no longer determined. As an example, if the computing device 120 finds, when calculating the distances between marker 310-1 and the other markers, that the distance between 310-1 and each of 310-2, 310-3 and 310-4 is less than the threshold distance, it may omit the calculation of the distances between 310-2 and the other markers 310-3 and 310-4. More generally, assuming a total of (m1 + m2 + … + mn) markers, where n is the number of target regions, the original number of traversal calculations is given by formula one:

Calculate_times1 = ((m1 + m2 + … + mn)² − (m1 + m2 + … + mn)) / 2,

and the number of calculations in this embodiment is given by formula two:

Calculate_times2 = Calculate_times1 − [(m1 − 2) + (m1 − 3) + … + 1] − … − [(mn − 2) + (mn − 3) + … + 1].

It follows that the amount of computation saved by this approach grows with the square of the number of markers in each target region. In this way, computation may be saved for the computing device 120.
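The two counting formulas above can be checked numerically; the following Python sketch (function names are illustrative, not from the patent) reproduces formula one and formula two for a list of per-region marker counts:

```python
def calculate_times1(marker_counts):
    """Formula one: exhaustive pairwise count over N = m1 + ... + mn
    markers, i.e. (N^2 - N) / 2 = N choose 2."""
    n = sum(marker_counts)
    return (n * n - n) // 2

def saved_per_region(m):
    """Pairs skipped within one region of m markers:
    (m - 2) + (m - 3) + ... + 1."""
    return sum(range(1, m - 1))

def calculate_times2(marker_counts):
    """Formula two: the exhaustive count minus the skipped pairs of
    every target region."""
    return calculate_times1(marker_counts) - sum(
        saved_per_region(m) for m in marker_counts)
```

For the scene of fig. 3A (one region of four cones plus one isolated marker, counts `[4, 1]`), the exhaustive traversal needs 10 distance calculations while the shortcut needs only 7.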
At 406, the computing device 120 may determine a connected domain based on the distances determined above. The connected domain includes a number of markers greater than a first threshold number, e.g., 4. Further, the distance between every two markers included in the connected domain is less than the threshold distance. For a target area, there must exist a threshold distance such that, for each marker, the smallest of its distances to all other markers is less than that threshold. That is, if a marker is present in a certain target area, there is certainly another marker in the area such that the distance between the two markers is minimized. Additionally or alternatively, the threshold distance may be set manually based on a priori information.
At 408, the computing device 120 may determine the area surrounded by the markers included in the connected domain as the target area 130. In this way, the computing device 120 identifies the target area surrounded by the markers by detecting a plurality of markers and performing distance calculations based on their location information to determine the connected domain. The computing device 120 can thus update road condition information in a timely and accurate manner, providing reliable data support for vehicle path planning.
In some embodiments, the computing device 120 may also compare the determined current contour of the target area 130 with a previous contour of a previous target area determined based on a previous frame 111 of the video. If the overlap ratio of the current contour and the previous contour is less than or equal to a threshold ratio (e.g., 80%), the previous target area may be updated with the target area 130; otherwise, the target area 130 is considered unchanged. In particular, the computing device 120 may traverse all existing previous contours, find the previous contour that coincides most with the current contour, and record the overlap ratio between the two. In this way, the computing device 120 may determine whether the newly determined target area has changed and report the change.
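As an illustration of the overlap test (approximating contours by axis-aligned bounding boxes, which is a simplifying assumption for brevity and not how the patent defines the contour):

```python
def overlap_ratio(box_a, box_b):
    """Overlap ratio of two contours approximated by bounding boxes
    (x_min, y_min, x_max, y_max): intersection area divided by the
    smaller box's area."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / min(area_a, area_b)

def should_update(current, previous, threshold_ratio=0.8):
    """Report a change when the new contour no longer coincides with the
    stored one above the threshold ratio (e.g. 80%)."""
    return overlap_ratio(current, previous) <= threshold_ratio
```

A box shifted by half its width overlaps its predecessor by only 50%, so `should_update` would trigger a report; a box shifted by a tenth of its width (90% overlap) would not.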
It should be appreciated that the target area identification method of the present disclosure has an advantage over conventional identification methods in that it does not require information to be collected and reported manually each time a new target area appears in the surveillance video. The reason is that the present disclosure finds the plurality of markers that form the target area by determining the connected domain, thereby completing the identification. Therefore, roads equipped with monitoring cameras can be fully covered without excessive manual intervention, providing users with timely and accurate road information, saving human resource costs, and improving user experience.
Fig. 5 illustrates a block diagram of an apparatus 500 for identifying a target area 130 according to an embodiment of the disclosure. As shown in fig. 5, the apparatus 500 may include: a location information acquisition module 502 configured to acquire location information of a set of markers detected in a current frame of a video; a distance determination module 504 configured to determine, based on the location information, a distance between every two of at least a portion of the set of markers; a connected domain determination module 506 configured to determine, based on the distances, a connected domain including a number of markers greater than a first threshold number, the distance between every two markers included in the connected domain being less than a threshold distance; and a target area determination module 508 configured to determine an area surrounded by the markers included in the connected domain as the target area.
In some embodiments, the location information acquisition module 502 is configured to: determine the location information if the number of the set of markers in a predetermined number of consecutive frames of the video is determined to be greater than or equal to a second threshold number.
In some embodiments, the location information acquisition module 502 is configured to: determine the location information if the number of the set of markers in the current frame is determined to be greater than or equal to a third threshold number.
In some embodiments, the apparatus may further comprise: a contour comparison module configured to compare the determined current contour of the target region with a previous contour of a previous target region determined based on a previous frame of the video; and an updating module configured to update the previous target region with the target region if the overlap ratio of the current contour and the previous contour is less than or equal to a threshold ratio.
In certain embodiments, if a first distance between a first marker and a second marker in the set of markers and a second distance between the first marker and a third marker are both less than the threshold distance, the distance between the second marker and the third marker need not be determined.
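A minimal sketch of this pruning, assuming plain (x, y) marker coordinates and illustrative names: pairs whose closeness is already implied by a shared near neighbor are skipped, which is safe when only connectivity (not the exact distance) feeds the connected-domain step.

```python
import math
from itertools import combinations


def close_pairs_pruned(markers, threshold):
    """Return (close_pairs, distances_computed). When markers j and k are
    both within `threshold` of some earlier marker i, the pair (j, k) is
    skipped: all three markers land in the same connected domain anyway."""
    skip = set()
    close = []
    computed = 0
    for i in range(len(markers)):
        near = []
        for j in range(i + 1, len(markers)):
            if frozenset((i, j)) in skip:
                continue
            computed += 1
            if math.dist(markers[i], markers[j]) < threshold:
                close.append((i, j))
                near.append(j)
        # two markers that share the near neighbor i: their own distance
        # no longer needs to be determined
        for j, k in combinations(near, 2):
            skip.add(frozenset((j, k)))
    return close, computed
```

For dense clusters of markers, this avoids a large share of the quadratic pairwise distance computations.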
In certain embodiments, the marker may include at least one of a cone barrel, a cylindrical barrel, and a roadblock ball.
Fig. 6 illustrates a block diagram of a computing device 600 capable of implementing various embodiments of the disclosure. Device 600 may be used to implement computing device 120 of fig. 1. As shown, the device 600 includes a Central Processing Unit (CPU) 601 that can perform various suitable actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Processing unit 601 performs the various methods and processes described above, such as processes 200, 300, and 400. For example, in some embodiments, processes 200, 300, and 400 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by CPU 601, one or more of the steps of processes 200, 300, and 400 described above may be performed. Alternatively, in other embodiments, CPU 601 may be configured to perform processes 200, 300, and 400 in any other suitable manner (e.g., by means of firmware).
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and so forth.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (15)
1. A method for identifying a target area, comprising:
acquiring location information of a set of markers detected in a current frame of a video;
determining a distance between each two of at least a portion of the set of markers based on the location information;
determining, based on the distance, a connected domain including a number of markers greater than a first threshold number, a distance between every two of the markers included in the connected domain being less than a threshold distance; and
determining a region surrounded by the markers included in the connected domain as the target area.
2. The method of claim 1, wherein obtaining location information for the set of markers comprises:
the location information is determined if the number of the set of markers in a predetermined number of consecutive frames of the video is determined to be greater than or equal to a second threshold number.
3. The method of claim 1, wherein obtaining location information for the set of markers comprises:
the location information is determined if the number of the set of markers in the current frame is determined to be greater than or equal to a third threshold number.
4. The method of claim 1, further comprising:
comparing the determined current contour of the target region with a previous contour of a previous target region determined based on a previous frame in the video; and
if the ratio of the overlap of the current contour and the previous contour is less than or equal to a threshold ratio, the previous target region is updated with the target region.
5. The method of claim 1, wherein if a first distance between a first marker and a second marker in the set of markers and a second distance between the first marker and a third marker are both less than the threshold distance, the distance between the second marker and the third marker need not be determined.
6. The method of claim 1, wherein the set of markers comprises at least one of:
a cone barrel;
a cylindrical barrel; and
roadblock ball.
7. An apparatus for identifying a target area, comprising:
a position information acquisition module configured to acquire position information of a set of markers detected in a current frame of a video;
a distance determination module configured to determine a distance between each two of at least a portion of the set of markers based on the location information;
a connected domain determining module configured to determine a connected domain including a number of markers greater than a first threshold number based on the distance, a distance between each two of the markers included in the connected domain being less than a threshold distance; and
a target region determination module configured to determine a region surrounded by the markers included in the connected domain as the target region.
8. The apparatus of claim 7, wherein the location information acquisition module is configured to:
the location information is determined if it is determined that the number of the set of identifiers in a predetermined number of consecutive frames in the video is greater than or equal to a second threshold number.
9. The apparatus of claim 7, wherein the location information acquisition module is configured to:
the location information is determined if the number of the set of identifiers in the current frame is determined to be greater than or equal to a third threshold number.
10. The apparatus of claim 7, further comprising:
a contour comparison module configured to compare the determined current contour of the target region with a previous contour of a previous target region determined based on a previous frame in the video; and
an updating module configured to update the previous target region with the target region if a ratio of the overlap of the current contour and the previous contour is less than or equal to a threshold ratio.
11. The apparatus of claim 7, wherein if a first distance between a first marker and a second marker in the set of markers and a second distance between the first marker and a third marker are both less than the threshold distance, the distance between the second marker and the third marker need not be determined.
12. The apparatus of claim 7, wherein the set of markers comprises at least one of:
a cone barrel;
a cylindrical barrel; and
roadblock ball.
13. An electronic device, comprising:
one or more processors; and
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1-6.
14. A computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of any of claims 1-6.
15. A roadside device comprising the electronic device of claim 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010536739.XA CN111709354B (en) | 2020-06-12 | 2020-06-12 | Method and device for identifying target area, electronic equipment and road side equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111709354A CN111709354A (en) | 2020-09-25 |
CN111709354B true CN111709354B (en) | 2023-07-28 |
Family
ID=72540256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010536739.XA Active CN111709354B (en) | 2020-06-12 | 2020-06-12 | Method and device for identifying target area, electronic equipment and road side equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111709354B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114137977B (en) * | 2021-11-29 | 2024-02-27 | 北京云迹科技股份有限公司 | Moving method, robot, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108628206A (en) * | 2017-03-17 | 2018-10-09 | 通用汽车环球科技运作有限责任公司 | Road construction detecting system and method |
CN110991320A (en) * | 2019-11-29 | 2020-04-10 | 北京百度网讯科技有限公司 | Road condition detection method and device, electronic equipment and storage medium |
CN111063192A (en) * | 2019-12-31 | 2020-04-24 | 戴姆勒股份公司 | Operating area compliance judgment method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140160295A1 (en) * | 2012-12-06 | 2014-06-12 | Honda Motor Co., Ltd. | Road condition detection |
- 2020-06-12: application CN202010536739.XA filed in China; patent CN111709354B granted (status: Active)
Non-Patent Citations (1)
Title |
---|
Qi Lin. Path planning and decision-making for low-speed autonomous vehicles. China Master's Theses Full-text Database, 2019, Engineering Science and Technology II (No. 5), C035-354. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20211020 Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Address before: 100094 2 / F, baidu building, No.10 Shangdi 10th Street, Haidian District, Beijing Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||