CN112183244A - Scene establishing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN112183244A
CN112183244A
Authority
CN
China
Prior art keywords
scene
road surface
information
surface area
target road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010956270.5A
Other languages
Chinese (zh)
Inventor
吴建琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010956270.5A priority Critical patent/CN112183244A/en
Publication of CN112183244A publication Critical patent/CN112183244A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos

Abstract

The invention provides a scene establishing method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring a snapshot image of a target road surface area captured by an image acquisition device; determining scene information of the target road surface area from a segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area; and establishing a scene of the target road surface area according to the scene information. This technical scheme solves the problem in the related art that a scene cannot be output in structured form, so that events occurring in the scene cannot be processed accurately: scene information at least used for indicating the road type is determined from the snapshot image, a scene is established according to the scene information, and events occurring in the scene can then be processed accurately.

Description

Scene establishing method and device, storage medium and electronic device
Technical Field
The invention relates to the technical field of image processing, in particular to a scene establishing method and device, a storage medium and an electronic device.
Background
Scene reconstruction is a common approach in the field of computer vision: using camera lenses and image processing techniques, an image scene is parsed and structured information is derived from it. At present, scene reconstruction in the prior art mainly relies on a laser radar or a binocular camera to acquire three-dimensional data for three-dimensional reconstruction of the spatial environment, and scene analysis information is generated for the detection area of the detector.
The related art discloses a thermal-infrared semantic generation method for traffic scenes based on a twin (Siamese) semantic network. Its key design points are: using a twin semantic network based on the theoretical framework of a cycle-consistent generative adversarial network, with a reasonably designed network structure into which a residual module and dilated convolutions are introduced, higher-quality feature extraction and semantic generation are realized so as to generate a more stable thermal-infrared semantic traffic image. However, that method takes a better-quality scene picture as its technical end point and cannot attach structured information to the image data, so secondary applications such as traffic event judgment cannot be developed on top of it. The related art also discloses a traffic scene analysis method based on a multi-task network. This method can extract rich image features, compensates for the loss of image detail caused by down-sampling in the encoder, and helps improve segmentation and detection results. It proposes a multi-task network structure that realizes semantic segmentation and target detection of traffic scene images in a single pass, with good real-time performance and high accuracy. However, this technique cannot extract the structured information of the scene; it only detects targets and connected domains in the scene accurately and efficiently, and therefore cannot support secondary development of traffic-event applications.
For the problems in the related art that structured output cannot be performed on a scene, so that events occurring in the scene cannot be processed accurately, no effective technical solution has yet been provided.
Disclosure of Invention
The embodiments of the invention provide a scene establishing method and device, a storage medium and an electronic device, so as to at least solve the problems in the related art that structured output cannot be performed on a scene, so that events in the scene cannot be processed accurately.
The embodiment of the invention provides a scene establishing method, which comprises the following steps: acquiring a snapshot image of a target road surface area by an image acquisition device; determining scene information of the target road surface area from a segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area; and establishing a scene of the target road surface area according to the scene information.
Optionally, determining scene information of the target road surface area from the segmentation composite map corresponding to the snapshot image includes: performing iterative fitting on the non-arrow type instances in the segmentation composite map to obtain a fitting result, wherein the fitting result includes at least one of the following: endpoint information and slope information of a line; and analyzing the fitting result to determine line information in the scene information, wherein the line information includes the lane line.
Optionally, after analyzing the fitting result to determine the line information in the scene information, the method further includes: performing a modification operation on the determined line information in the scene information, wherein the modification operation is at least one of the following: correcting the initial position of the line information, supplementing a missing line in the line information, and deleting an erroneous line in the line information.
Optionally, establishing a scene of the target road surface area according to the scene information includes: acquiring a road surface mark on the target road surface area; determining a plurality of area types included in the target road surface area according to the road surface mark and the corrected line information; and establishing a scene of the target road surface area according to the multiple area types.
Optionally, in the case where the line information includes the lane line, analyzing the fitting result to determine the line information in the scene information includes: acquiring a road surface mark of the target road surface area from the fitting result; creating a sequence table according to the positions of the road surface marks and the lane lines in the snapshot image, wherein in the sequence table the road surface marks correspond to first values and the lane lines correspond to second values; and determining the lane lines in the scene information according to the sequence table.
Optionally, after the scene of the target road surface area is established according to the scene information, the method further includes: outputting the established scene of the target road surface area in coordinate form; and judging, based on the target road surface area in coordinate form, whether the target vehicle has committed a violation.
Optionally, acquiring a snapshot image of the target road surface area captured by the image acquisition device includes: determining whether a scene of the target road surface area has been established by the image acquisition device; and in the case that the scene has not been established, acquiring a snapshot image of the target road surface area captured by the image acquisition device.
Optionally, after determining whether the scene of the target road surface area has been established by the image acquisition device, the method further includes: matching the established scene of the target road surface area with the scene cached in the image acquisition device; and, in the event of a mismatch, updating and saving the established scene of the target road surface area in the image acquisition device.
According to another embodiment of the present invention, there is also provided a scene creation apparatus including: a first acquisition module, configured to acquire a snapshot image of a target road surface area captured by an image acquisition device; a second obtaining module, configured to determine scene information of the target road surface area from a segmentation composite map corresponding to the snapshot image, where the scene information is at least used to indicate a lane line of the target road surface area; and an establishing module, configured to establish a scene of the target road surface area according to the scene information.
Optionally, the second obtaining module is further configured to perform iterative fitting on the non-arrow type instances in the segmentation composite map to obtain a fitting result, where the fitting result includes at least one of the following: endpoint information and slope information of a line; and to analyze the fitting result to determine line information in the scene information, where the line information includes the lane line.
Optionally, the second obtaining module is further configured to perform a modification operation on the determined line information in the scene information, where the modification operation is at least one of the following: correcting the initial position of the line information, supplementing a missing line in the line information, and deleting an erroneous line in the line information.
Optionally, the establishing module is further configured to obtain a road surface identifier on the target road surface area; determining a plurality of area types included in the target road surface area according to the road surface mark and the corrected line information; and establishing a scene of the target road surface area according to the multiple area types.
Optionally, the second obtaining module is further configured to obtain a road surface identifier of the target road surface area from the fitting result; creating a sequence table according to the positions of the road surface marks and the lane lines in the snapshot image, wherein the road surface marks correspond to first values and the lane lines correspond to second values in the sequence table; and determining the lane lines in the scene information according to the sequence table.
Optionally, the apparatus further comprises: a judgment module, configured to output the established scene of the target road surface area in coordinate form, and to judge, based on the target road surface area in coordinate form, whether the target vehicle has committed a violation.
Optionally, the first acquisition module is further configured to determine whether a scene of the target road surface area has been established by the image acquisition device, and, in the case that the scene has not been established, to acquire a snapshot image of the target road surface area captured by the image acquisition device.
Optionally, the apparatus further comprises: a matching module, configured to match the established scene of the target road surface area with the scene cached in the image acquisition device, and, in the event of a mismatch, to update and save the established scene of the target road surface area in the image acquisition device.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a snapshot image of a target road surface area captured by the image acquisition device is obtained; scene information of the target road surface area is determined from a segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area; and a scene of the target road surface area is established according to the scene information. This technical scheme solves the problems in the related art that structured output cannot be performed on a scene, so that events in the scene cannot be processed accurately: scene information at least used for indicating the road type is determined from the snapshot image, a scene is established according to the scene information, and events in the scene can then be processed accurately.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal of a scene establishment method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a scene establishment method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating reconstruction of a traffic scene according to an alternative embodiment of the present invention;
FIG. 4 is a schematic flow chart of a main process of a camera configuration information parsing and evaluation module according to an alternative embodiment of the present invention;
FIG. 5 is a schematic flow chart of segment information extraction according to an alternative embodiment of the present invention;
fig. 6 is a schematic flow chart illustrating a process of modifying a scene region reconstruction module according to an alternative embodiment of the present invention;
FIG. 7 is a schematic flow diagram of a road region type matching module in accordance with an alternative embodiment of the present invention;
FIG. 8 is a schematic flow chart of lane line repair according to an alternative embodiment of the present invention;
FIG. 9 is a schematic illustration of filtering of pavement marking information in accordance with an alternative embodiment of the present invention;
FIG. 10 is a schematic illustration of filtering multiple road surface identifications for a roadway area in accordance with an alternative embodiment of the present invention;
FIG. 11 is a schematic flow chart of supplemental repair of a lane line in accordance with an alternative embodiment of the present invention;
FIG. 12 is a schematic illustration of a repair in which two lane lines exist to the left or right of the location of the lane line to be repaired in accordance with an alternative embodiment of the present invention;
FIG. 13 is a schematic illustration of the repair of an alternate embodiment of the present invention in which there are no two lane lines to the left or right of the location of the lane line to be repaired;
fig. 14 is a block diagram of a scene creation apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal or a similar operation device. Taking the operation on a computer terminal as an example, fig. 1 is a hardware structure block diagram of a computer terminal of a scene establishment method according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the computer terminal. For example, the computer terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent functionality to that shown in FIG. 1 or with more functionality than that shown in FIG. 1. The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the scene establishment method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to a computer terminal over a network. 
Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
According to an embodiment of the present invention, a scene establishment method is provided, which is applied to the computer terminal, and fig. 2 is a flowchart of the scene establishment method according to the embodiment of the present invention, as shown in fig. 2, including:
step S202, acquiring a snapshot image of a target road surface area captured by an image acquisition device;
step S204, determining scene information of the target road surface area from a segmentation composite image corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area;
and step S206, establishing a scene of the target road surface area according to the scene information.
Through the above steps, a snapshot image of a target road surface area captured by the image acquisition device is obtained; scene information of the target road surface area is determined from a segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area; and a scene of the target road surface area is established according to the scene information. This technical scheme solves the problems in the related art that structured output cannot be performed on a scene, so that events in the scene cannot be processed accurately: scene information at least used for indicating the road type is determined from the snapshot image, a scene is established according to the scene information, and events in the scene can then be processed accurately.
There are various ways for step S204 to determine the scene information of the target road surface area from the snapshot image. In an optional embodiment, the following scheme may be implemented: performing iterative fitting on the non-arrow type instances in the segmentation composite map to obtain a fitting result, wherein the fitting result includes at least one of the following: endpoint information and slope information of a line; and analyzing the fitting result to determine line information in the scene information, wherein the line information includes the lane line.
That is to say, the snapshot image of the target road surface area acquired by the acquisition device may be segmented to generate a segmentation composite map corresponding to the snapshot image; the non-arrow type instances in the composite map are then iteratively fitted, and the line information in the scene information is determined according to the endpoint information, the slope information of the lines, and the like in the fitting result.
It should be noted that the line information includes, but is not limited to: the stop line in the segmentation composite map, the lane lines in the composite map, the pedestrian crossing lines in the composite map, and the waiting lines in the composite map.
For example, when the line information to be determined is a lane line, a specific processing scheme is as follows. The image information of the target road area, obtained from an acquisition device or from a server of the department managing the target road area, is segmented into a picture, and line information is extracted from the segmented image. The non-directional-arrow type instances are fitted using RANSAC to obtain a group of endpoint information describing a starting point and an end point, while the slope and inclination-angle information is recorded for subsequent judgment. Further, when a line is determined to be a lane line according to the line information, some false-detection results among the determined lane lines are removed by means of the determined stop-line information, and the lane lines are arranged and recorded in sequence from left to right according to the change of the slope inclination angle. To make the positions of the lane lines more accurate, the lane lines are corrected according to the obtained starting coordinates of the stop line; at the same time, the road arrow information, road marking information, and roadside direction-sign information obtained from detection are used to supplement lane boundary lines that may have been missed.
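The RANSAC fitting step above can be sketched in code. The following is a minimal, illustrative line fit over the pixel coordinates of a single non-arrow segmentation instance; the function name, iteration count, and inlier tolerance are assumptions, since the patent does not specify the actual fitting parameters.

```python
import random

def fit_line_ransac(points, iters=200, tol=2.0, seed=0):
    """Fit a 2D line to instance pixels with a minimal RANSAC loop.

    Returns (p0, p1, slope): the extreme inlier endpoints and the
    line's slope (dy/dx), or None for a near-vertical line.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0:
            continue
        # Perpendicular distance of every pixel to the candidate line.
        inliers = [(x, y) for (x, y) in points
                   if abs(dy * (x - x1) - dx * (y - y1)) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    xs = [p[0] for p in best_inliers]
    if max(xs) - min(xs) > 0:
        # Endpoints: extreme inliers by abscissa (lexicographic order).
        p0, p1 = min(best_inliers), max(best_inliers)
        slope = (p1[1] - p0[1]) / (p1[0] - p0[0]) if p1[0] != p0[0] else None
    else:
        # Near-vertical line: order by ordinate instead; slope undefined.
        p0 = min(best_inliers, key=lambda p: p[1])
        p1 = max(best_inliers, key=lambda p: p[1])
        slope = None
    return p0, p1, slope
```

The returned endpoints and slope correspond to the "group of endpoint information" and the "slope and inclination-angle information" recorded for subsequent judgment.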
Optionally, after analyzing the fitting result to determine line information in the scene information, the method further includes: performing a modification operation on the determined line information in the scene information, wherein the modification operation is at least one of the following operations: correcting the initial position of the line information, supplementing the missing line in the line information and deleting the wrong line in the line information.
In short, because the line information in the scene information determined from the fitting result may not meet expectations, the fitting result needs to be analyzed so that modification operations can be performed on the line information: for example, the starting position of the line information in the scene information is corrected, missing lines in the line information are supplemented, and erroneous lines in the line information are deleted. This ensures that the line information in the scene information determined from the modified fitting result meets the requirements for establishing the scene.
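As an illustration of the supplementing operation, one simple heuristic (an assumption here, not the patent's stated method) exploits the roughly uniform spacing of lane lines: a gap about twice the median gap suggests a single missed line, which can be filled in at the midpoint.

```python
def supplement_missing_lines(xs, tol=0.25):
    """Given x-positions of at least three detected lane lines, insert a
    line wherever one gap is roughly twice the median gap — a simple
    uniform-spacing heuristic for a single missed line."""
    xs = sorted(xs)
    gaps = sorted(b - a for a, b in zip(xs, xs[1:]))
    median = gaps[len(gaps) // 2]
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs((b - a) - 2 * median) <= tol * median:
            out.append((a + b) / 2)  # midpoint fills the missed line
        out.append(b)
    return out
```

For instance, lane lines detected at abscissae 0, 100, 300, and 400 have one double-width gap, so a line at 200 is supplemented.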
Optionally, establishing a scene of the target road surface area according to the scene information includes: acquiring a road surface mark on the target road surface area; determining a plurality of area types included in the target road surface area according to the road surface mark and the corrected line information; and establishing a scene of the target road surface area according to the multiple area types.
That is to say, in order to make the scene of the target road surface area established according to the scene information richer, the road surface marks on the target road surface area together with the corrected line information can be used to determine the multiple area types contained in the target road surface area, so that the established scene of the target road surface area is more realistic.
In one embodiment, the line information includes: under the condition of the lane line, analyzing the fitting result to determine line information in the scene information, including: acquiring a road surface mark of the target road surface area from the fitting result; creating a sequence table according to the positions of the road surface marks and the lane lines in the snapshot image, wherein the road surface marks correspond to first values and the lane lines correspond to second values in the sequence table; and determining the lane lines in the scene information according to the sequence table.
For example, after the road surface marks of the target area are recognized, a lane line may be represented as 1 and a road surface mark as 0 according to a preset sequence-creation mode. The lane lines are arranged in order from left to right by abscissa to generate an array of all 1s, and according to the positions of the road surface marks in the snapshot image, 0s representing the road surface marks are inserted into the lane-line array, yielding a sequence table formed by a series of 0s and 1s. The lane lines in the scene information can then be confirmed according to the lane-line identifiers in the sequence table.
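The sequence-table construction in this example can be sketched directly. The helper names and the 1,0,1 lane-pattern check are illustrative assumptions; only the encoding itself (lane line = 1, road surface mark = 0, ordered by abscissa) comes from the example above.

```python
def build_sequence_table(lane_line_xs, marking_xs):
    """Merge lane lines (value 1) and road surface marks (value 0)
    into one left-to-right sequence table keyed by image abscissa."""
    items = [(x, 1) for x in lane_line_xs] + [(x, 0) for x in marking_xs]
    items.sort()  # ascending abscissa = left to right in the image
    return [v for _, v in items]

def count_confirmed_lanes(table):
    """Assumed confirmation rule: a lane is a road surface mark (0)
    bounded by two lane lines (1), i.e. the pattern 1,0,1."""
    return sum(1 for i in range(len(table) - 2)
               if table[i:i + 3] == [1, 0, 1])
```

Four lane lines interleaved with three marks produce the table [1, 0, 1, 0, 1, 0, 1], from which three lanes are confirmed.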
Optionally, after the scene of the target road surface area is established according to the scene information, the method further includes: outputting the established scene of the target road surface area in coordinate form; and judging, based on the target road surface area in coordinate form, whether the target vehicle has committed a violation. That is, after the scene of the target road surface area is established, the scene is output in coordinate form, and whether a target vehicle in the target road surface area has committed a violation is then judged according to the coordinates converted from the scene.
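Once the scene is exported in coordinate form, a violation check reduces to geometric tests on those coordinates. The sketch below uses a standard ray-casting point-in-polygon test as one plausible primitive; the region layout and the interpretation (e.g. a vehicle point falling inside a restricted-lane polygon) are assumed for illustration, not taken from the patent.

```python
def point_in_region(pt, polygon):
    """Ray-casting test: is a vehicle point inside a scene region
    exported as a closed coordinate polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray cast to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A back-end server could apply such a test between each snapshot vehicle position and the reconstructed region polygons to flag candidate violations for review.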
Optionally, acquiring a snapshot image of the target road surface area by the image acquisition device includes: determining whether a scene of the target road surface area has been established by the image acquisition device; and under the condition of not establishing, acquiring a snapshot image of the target road surface area by the image acquisition device.
That is, when it is determined from the configuration information of the image capturing device itself (this part of the scheme is generally configured manually on the image capturing device) that the scene of the target road surface area has not been established, a captured image of the target road surface area needs to be acquired by the image capturing device for scene construction of the target road surface area.
Optionally, after determining whether the scene of the target road surface area is established by the image acquisition device, the method further includes: matching the established scene of the target road surface area with the scene cached in the image acquisition device; in the event of a mismatch, saving the scene update of the established target road surface area in the image acquisition device.
After the scene of the target road surface area is established, the established scene is matched against the scene cached in the image acquisition device to determine whether the scene establishment meets the requirement. When they do not match, the newly established scene of the target road surface area is saved as an update in the image acquisition device, ensuring the accuracy of the coordinate data when judging, according to the coordinates converted from the scene, whether a target vehicle in the target road surface area has committed a violation.
The following explains the flow of the above-described scene establishment method with reference to several alternative embodiments, but is not limited to the technical solution of the embodiments of the present invention.
An optional embodiment of the present invention provides a method for reconstructing a traffic scene. Pictures captured by traffic monitoring cameras at urban road intersections (equivalent to the snapshot images in the embodiments of the present invention) account for the vast majority of traffic scene data sources. A traffic scene (equivalent to a scene in the embodiments of the present invention) is reconstructed by extracting structured information of such a picture (equivalent to the scene information in the embodiments of the present invention) that is highly relevant to the scene content and the traffic road; according to the reconstructed scene and the traffic scene information extracted after the picture is acquired, a back-end server can assist manual judgment of whether a vehicle in the picture is driving illegally.
Fig. 3 is a schematic flow chart of reconstructing a traffic scene according to an alternative embodiment of the present invention, as shown in fig. 3, including the following steps:
Step 1: inputting the data for traffic scene reconstruction. The input data has two main sources: one is the configuration information of the front-end camera (equivalent to the scheme, described in the above embodiment, of determining through the image acquisition device whether the scene of the target road surface area has been established), and the other is the result information after the image is segmented and detected.
For front-end configuration information, the scene reconstruction module can directly analyze the manually entered configuration information of the image acquisition device and output scene structured information accordingly. For the second data source, the number of captured pictures varies from 2 to 6 depending on the camera configuration parameters; optionally, the images may contain 1 to 2 close-up views of the vehicle, with the remaining pictures being captures of the vehicle at different times serving as an evidence chain.
Step 2: according to two different input data types, the processing of the scene reconstruction module is also divided into two flow paths for accurately processing the input data;
Optionally, as shown in fig. 4, when configuration information is transmitted from the outside, the scene reconstruction module matches the input configuration information against its internal cache. If the timestamp difference between the two is too large, the newly transmitted configuration information is adopted as the scene cache, to prevent information deviation caused by lens shift over a long time span, and the scene structured information is then output.
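The cache-matching judgment above can be sketched as follows (the field names and the one-hour threshold are assumptions for illustration):

```python
def update_scene_cache(cached, incoming, max_span=3600):
    """Match incoming configuration against the internal cache: if there
    is no cache yet, or the timestamp gap exceeds max_span seconds (the
    lens may have shifted over a long span), adopt the incoming
    configuration as the new scene cache; otherwise keep the cache."""
    if cached is None or abs(incoming["timestamp"] - cached["timestamp"]) > max_span:
        return incoming
    return cached

cached = {"timestamp": 1_000, "scene": "old"}
fresh = {"timestamp": 8_000, "scene": "new"}       # gap of 7000 s > 3600 s
print(update_scene_cache(cached, fresh)["scene"])  # -> new
```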
It should be noted that picture information is the most common input form, because the server's information comes from the database of the traffic police department, whose data consists mainly of pictures and does not include pre-configuration information. Therefore, certain processing stages are required to obtain the structured information of the scene.
Optionally, when picture information is transmitted from the outside, the structured information of the scene is obtained through the following processing stages.
First stage: extracting line segment information from the segmentation input; specifically, the picture information is processed according to the flow shown in fig. 5.
Step S502: extracting line segment information from the segmentation input. All instances in the instance segmentation composite map are traversed, and the non-guide-arrow types are fitted using the Random Sample Consensus (RANSAC) algorithm to obtain a set of endpoint information describing start and end points; slope and inclination angle information is recorded at the same time for subsequent judgment. Stop line information is also processed: the stop line is searched for and confirmed in the fitting result, and its left and right endpoints are obtained and recorded.
Step S504: after the example traversal is finished, recording information of a stop line, a lane line, a sidewalk and a waiting area;
lane line information processing: and confirming the lane lines in the fitting result, simultaneously removing partial false detection results according to the information of the stop line, and sequentially arranging and recording the side lines of the traffic lane from left to right according to the change of the slope inclination angle.
Sidewalk information processing: acquiring and recording the sidewalk edge line information in the composite map.
Waiting-area line information processing: acquiring and recording the waiting-area line information in the composite map.
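The fitting in step S502 can be sketched with a minimal RANSAC-style loop (an illustrative sketch, assuming the pixels of one segmented instance are given as (x, y) tuples; the iteration count and inlier tolerance are assumptions):

```python
import math
import random

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Minimal RANSAC line fit: repeatedly sample two points, count the
    points within `tol` of the line through them, keep the model with
    the most inliers, and report endpoints, slope and inclination."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if (x1, y1) == (x2, y2):
            continue
        norm = math.hypot(x2 - x1, y2 - y1)
        # Point-to-line distance via the cross-product formula.
        inliers = [(x, y) for x, y in points
                   if abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    start, end = min(best_inliers), max(best_inliers)  # extreme inliers as endpoints
    dx, dy = end[0] - start[0], end[1] - start[1]
    slope = dy / dx if dx else float("inf")
    angle = math.degrees(math.atan2(dy, dx))
    return start, end, slope, angle

# Ten collinear pixels on y = 2x plus one outlier:
start, end, slope, angle = ransac_line([(x, 2 * x) for x in range(10)] + [(3, 50)])
print(start, end, slope)  # -> (0, 0) (9, 18) 2.0
```

The returned start/end points, slope and inclination angle correspond to the endpoint and slope information the text says is recorded for subsequent judgment.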
Second stage: correcting area boundary lines, supplementing missing lines and deleting erroneous lines according to the obtained line segment information, mainly for the start positions of stop lines, lane lines and waiting-area boundary lines; as shown in fig. 6, the specific steps are as follows:
Step S602: acquiring the start coordinates of the stop line, estimating the coordinates of the junction of each lane line and the stop line, and correcting and recording the position of the upper endpoint of any lane line that does not touch the stop line;
and judging lines in the scene, and reserving one line as final information for a batch of lines with a short distance according to the distance, the slope inclination angle and the length information. Meanwhile, according to the detected road surface arrow information, road surface identification information and road test indicator information, performing leakage repairing on the lane side line which is possibly missing;
Step S604: correcting and recording the position of the start point of the waiting-area line according to the stop line of the waiting area;
step S606: supplementing missing lines in the line segment information and deleting wrong lines in the line segment information;
step S608: and synthesizing and reconstructing a lane area, and recording the number of lanes and the coordinate point information of the lane sideline.
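The step of keeping one line per batch of nearby lines (judged by distance, slope inclination angle and length) can be sketched as follows; the line representation and the tolerances are assumptions for illustration:

```python
import math

def dedupe_lines(lines, x_tol=15.0, angle_tol=5.0):
    """Among fitted lines, keep one per batch of near-duplicates: lines
    whose mid-point abscissas and inclination angles are both close are
    treated as the same physical line, and the longest one is retained.
    Each line is ((x1, y1), (x2, y2), angle_deg)."""
    def mid_x(line):
        return (line[0][0] + line[1][0]) / 2

    def length(line):
        (x1, y1), (x2, y2), _ = line
        return math.hypot(x2 - x1, y2 - y1)

    kept = []
    for line in sorted(lines, key=length, reverse=True):  # longest first
        is_dup = any(abs(mid_x(line) - mid_x(k)) < x_tol
                     and abs(line[2] - k[2]) < angle_tol for k in kept)
        if not is_dup:
            kept.append(line)
    return kept

lines = [((100, 0), (100, 400), 90.0),   # true lane edge
         ((105, 0), (105, 300), 90.0),   # near-duplicate detection of the same edge
         ((300, 0), (300, 400), 90.0)]   # a different lane edge
print(len(dedupe_lines(lines)))  # -> 2
```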
Third stage: road type matching. The areas are traversed in turn, and area type matching is performed in combination with the input road marking information: if a road marking or arrow type has a containment relation with an area, the type is determined accordingly; otherwise the relation is confirmed by the distance between the road marking and the lane edge line, on the principle of proximity. Optionally, the road types may include a sidewalk, a bus lane, a waiting area, and the like.
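The containment-based part of this road type matching can be sketched as follows (bounding boxes stand in for the segmented areas, and the region and mark data are illustrative; the nearest-mark fallback on the proximity principle is omitted):

```python
def contains(bbox, pt):
    """Axis-aligned bounding-box containment test."""
    x1, y1, x2, y2 = bbox
    return x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2

def match_region_types(regions, marks):
    """Assign each region the type of the road mark it contains; regions
    containing no mark stay untyped (None). A fuller implementation would
    fall back to the nearest mark, per the proximity principle."""
    return {rid: next((kind for pt, kind in marks if contains(bbox, pt)), None)
            for rid, bbox in regions.items()}

regions = {"lane1": (0, 0, 100, 400), "lane2": (100, 0, 200, 400)}
marks = [((50, 300), "bus_lane"), ((150, 300), "straight_arrow")]
print(match_region_types(regions, marks))
# -> {'lane1': 'bus_lane', 'lane2': 'straight_arrow'}
```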
As shown in fig. 7, the sidewalk area information, the waiting area information, the traffic light position information, and the traffic light state information are written and output in sequence, and the steps are as follows:
step S702: writing and outputting the sidewalk area information;
Step S704: writing and outputting the waiting area information;
step S706: writing and outputting the traffic light position information;
step S708: and writing and outputting the traffic light state information.
The structured information of the reconstructed scene is then obtained in combination with the road type matching.
Step 3: outputting the structured information of the scene.
In an alternative embodiment of the present invention, the repair of the lane line is implemented in the following manner, as shown in fig. 8, specifically as follows:
step S800: and (4) counting the road marking information, traversing the structured information data generated according to the fitting result, and acquiring and recording the positions of all the guide arrows, the bus lane marks and the non-motor lane marks.
Step S802: confirming the reference marks for lane line repair. Since the repair of lane lines focuses on the area below the lens, for an intersection scene the structured information may include road surface marking information and the like from the opposite side of the intersection, which introduces considerable error interference into the confirmation of the lane-filling line segments; screening and filtering are therefore needed.
Optionally, an optional embodiment of the present invention provides the following procedure to filter the road surface markings.
step 8021: filtering identification information above the stop line or the sidewalk: the road surface identification information recognized in the scene may be generally divided into a lane area and a to-be-driven area, and in the subsequent line repairing process, only the identification information of the lane area needs to be used, and fig. 9 is a schematic diagram of filtering the road surface identification information according to an alternative embodiment of the present invention.
Step 8022: among the remaining marking information, one lane may contain several markings. The coordinate positions of all markings are counted, the remaining markings are classified into upper-side and lower-side markings of the lane area, and the lower-side road surface marking closest to the lens is selected as the basis for line completion. Fig. 10 is a schematic diagram of filtering the case of multiple road surface markings in one lane area according to an alternative embodiment of the present invention.
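Steps 8021 and 8022 together can be sketched as follows (an illustrative sketch that assumes each mark already carries a lane index and a centre coordinate, with image y growing downward):

```python
def pick_repair_marks(marks, stop_line_y):
    """Filter road-surface marks for line repair: drop marks above the
    stop line (smaller y means the far side of the intersection), then
    keep only the mark closest to the lens (largest y) in each lane.
    Marks are (lane_index, x, y) tuples."""
    below = [m for m in marks if m[2] > stop_line_y]   # step 8021
    lowest = {}                                        # step 8022
    for lane, x, y in below:
        if lane not in lowest or y > lowest[lane][2]:
            lowest[lane] = (lane, x, y)
    return sorted(lowest.values())

marks = [(0, 50, 100),    # far-side mark, above the stop line -> dropped
         (0, 55, 700),    # lane 0, closest to the lens -> kept
         (0, 52, 500),    # lane 0, farther from the lens -> dropped
         (1, 150, 650)]   # lane 1 -> kept
print(pick_repair_marks(marks, stop_line_y=400))
# -> [(0, 55, 700), (1, 150, 650)]
```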
Step S804: to facilitate understanding, the optional embodiment of the present invention uses a binary array sorted by position, where a lane line is 1 and marking information is 0. The specific flow for forming the sequence array is:
step 8041: and (4) arranging the lane lines in turn from left to right according to the abscissa to generate an array with all 1 s.
Step 8042: the positions of the marking information picked in step S802 relative to the lane lines are determined in turn, and 0s are inserted into the sequence generated in step 8041 to obtain a sequence consisting of 0s and 1s.
Step S806: determining the insertion positions of the supplementary lane lines from the sequence table. Since the prior information of the lane area where each road surface marking is located is known, the 0/1 sequence has the following properties:
1) the head and tail of the sequence are both 1, representing that the left and right boundaries of the lane area are bounded by lane lines;
2) there is a 1 between any two 0s, meaning that a lane line lies between two pieces of road surface marking information, separating the two markings into left and right lane areas;
for example, if a sequence of 0-1-0-0-1 exists, indicating that the leftmost and middle lane lines need to be filled, the final repair result should be 1-0-1-0-1-0-1.
It should be noted that the sequence array may be a binary array or another type of array, which is not limited in the present invention.
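The repair rules implied by properties 1) and 2) can be applied directly on the 0/1 sequence; the function name is an assumption:

```python
def repair_sequence(seq):
    """Insert the missing lane lines (1s) into the 0/1 sequence table:
    the head and tail must be 1, and any two adjacent marks (0s) must
    be separated by a 1. Returns the repaired sequence."""
    out = []
    for value in seq:
        if value == 0 and (not out or out[-1] == 0):
            out.append(1)   # missing lane line before this mark
        out.append(value)
    if not out or out[-1] == 0:
        out.append(1)       # missing rightmost boundary line
    return out

# The worked example from the text: 0-1-0-0-1 -> 1-0-1-0-1-0-1
print(repair_sequence([0, 1, 0, 0, 1]))  # -> [1, 0, 1, 0, 1, 0, 1]
```

The positions where 1s were inserted tell the next step which lane lines must be geometrically completed.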
Step S808: supplementing and repairing the lane line;
It should be noted that the theoretical basis of lane line restoration is the scene perspective effect. Due to the perspective of the lens, the image has two characteristics: all lane lines converge upward to a single perspective point; and the widths of different lanes at the same image height are approximately equal, i.e., the perspective effect in the direction orthogonal to the perspective direction is not obvious.
Fig. 11 is a schematic flow chart of lane line supplementary repair according to an alternative embodiment of the present invention, which includes:
Step 902: confirming whether two lane lines exist on the left or right side of the position of the lane line to be repaired, so as to determine the repair mode;
Step 904: when two lane lines are confirmed to exist, as shown in fig. 12, the abscissas on the two known lane lines are taken at two heights h1 and h2, giving two horizontal connecting segments of equal height; translating these two segments yields two points on the completion line, and connecting the two points gives the supplemented lane line;
Step 906: when two lane lines do not exist on one side, as shown in fig. 13, the perspective intersection point is first obtained by extending the other lane lines upward. A point p2 is obtained by connecting a point on the central axis of the lane marking frame to the known edge line on the right side, giving two coordinate points p1 and p2 at different heights on the road marking. Taking the central axis of the lane marking frame as the axis of symmetry, the connecting line is translated to the left to obtain a point p2_new on the line to be filled; the coordinates p1_new and p2_new of the two points on the filled edge line are then confirmed from p1, p2 and the existing edge line on one side, and the supplemented lane line is obtained by connecting p1_new and p2_new.
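The two-known-lines case (step 904) can be sketched as follows, exploiting the property that lane widths at the same image height are approximately equal; representing each known lane line as a function x(y) is an assumption for illustration:

```python
def complete_by_translation(left_line, right_line, h1, h2):
    """Given two known lane lines (as functions mapping image height y
    to abscissa x), sample both at heights h1 and h2; translating each
    horizontal connecting segment rightward by its own length gives two
    points on the missing line, which are then joined."""
    def new_point(h):
        xl, xr = left_line(h), right_line(h)
        return (xr + (xr - xl), h)  # equal lane widths at the same height
    return new_point(h1), new_point(h2)

# Two converging lane lines (perspective effect): width shrinks with height.
left = lambda y: 100 + 0.1 * (400 - y)
right = lambda y: 200 - 0.1 * (400 - y)
p1, p2 = complete_by_translation(left, right, 400, 300)
print(p1, p2)  # -> (300.0, 400) (270.0, 300)
```

Because the two sampled widths shrink with height, the completed line through p1 and p2 converges toward the same perspective point as the known lines, consistent with the first perspective property above.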
Through the above optional embodiment, undetected lane lines are repaired based on the confirmation of clearly detected lane lines, which greatly improves scene integrity, avoids the problem that occlusion of lane lines in a crowded road scene cannot be handled, and improves the judgment accuracy when a traffic incident occurs.
Through the above steps, the scene information in the picture is obtained through the information structuring process and output in coordinate form. It should be noted that merely improving the effect of scene segmentation and information detection is still far from actual event judgment using the scene, and a manual review process cannot be avoided. In addition, the traffic scene reconstruction method turns the actual traffic scene into a regular data form with a definite organizational structure; coordinate data can be obtained effectively through reasonable extraction, and computer technology can then comprehensively determine violations such as running a red light, failing to follow lane guidance, and illegal lane changes, which require joint consideration of lane line information, lane area information, traffic light information and vehicle information. The optional embodiment of the invention combines segmentation and detection, refining the picture information into different types of areas delineated by pixel coordinate points, which greatly facilitates the implementation of penalty strategies.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a scene establishing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 14 is a block diagram of a scene creation apparatus according to an embodiment of the present invention, and as shown in fig. 14, the apparatus includes:
the first acquisition module 80 is used for acquiring a snapshot image of the image acquisition device on the target road surface area;
a second obtaining module 82, configured to determine scene information of the target road surface area from a segmented composite image corresponding to the snapshot image, where the scene information is at least used to indicate a lane line of the target road surface area;
and the establishing module 84 is configured to establish a scene of the target road surface area according to the scene information.
With the above apparatus, a snapshot image of the target road surface area is acquired by the image acquisition device; scene information of the target road surface area is determined from the segmentation composite map corresponding to the snapshot image, the scene information being at least used to indicate a lane line of the target road surface area; and a scene of the target road surface area is established according to the scene information. This technical scheme solves the problem in the related art that a scene cannot be output in structured form, so that events in the scene cannot be processed accurately: scene information at least indicating the road type is determined from the snapshot image, the scene is established according to the scene information, and events in the scene can then be processed accurately.
Optionally, the second obtaining module is further configured to obtain a segmentation composite map corresponding to the snapshot image; performing iterative fitting on the non-arrow type example in the segmentation synthetic graph to obtain a fitting result, wherein the fitting result comprises at least one of the following: end point information, slope information of the line; analyzing the fitting result to determine line information in the scene information, wherein the line information includes: the lane line.
That is to say, the captured image of the target road surface area acquired by the acquisition device may be segmented to generate a segmented composite image corresponding to the captured image, the non-arrow type example in the segmented composite image is iteratively fitted, and the line information in the scene information is determined according to the endpoint information, the slope information of the line, and the like in the fitting result.
It should be noted that the line information includes, but is not limited to: a stop line in the segmentation composite map, a lane line in the segmentation composite map, a pedestrian crossing line in the segmentation composite map, and a waiting-area line in the segmentation composite map.
For example, when the line information to be determined is a lane line, a specific processing scheme is as follows. The image information of the target road area, obtained from an acquisition device or from a server of the authority managing the target road area, is segmented; line information is extracted from the segmented image, and the non-guide-arrow types are fitted using RANSAC to obtain a set of endpoint information describing start and end points, with slope and inclination angle information recorded at the same time for subsequent judgment. Further, when a line is determined to be a lane line according to the line information, partial false detection results among the determined lane lines are removed using the determined stop line information, and the lane lines are arranged and recorded in order from left to right according to the change of slope inclination angle. To make the lane line positions more accurate, the lane lines are corrected according to the obtained start coordinates of the stop line, and possibly missed lane edge lines are filled in according to the detected road surface arrow information, road surface marking information and roadside sign information.
Optionally, the second obtaining module is further configured to perform a modification operation on the line information in the determined scene information, where the modification operation is at least one of: correcting the initial position of the line information, supplementing the missing line in the line information and deleting the wrong line in the line information.
In short, because the line information in the scene information determined according to the fitting result may not meet expectations, the fitting result needs to be analyzed so that the line information can be modified: for example, the start position of the line information in the scene information is corrected, missing lines in the line information are supplemented, and erroneous lines in the line information are deleted, thereby ensuring that the line information determined from the modified fitting result meets the requirements for establishing the scene.
Optionally, the establishing module is further configured to obtain a road surface identifier on the target road surface area; determining a plurality of area types included in the target road surface area according to the road surface mark and the corrected line information; and establishing a scene of the target road surface area according to the multiple area types.
Optionally, the second obtaining module is further configured to obtain a road surface identifier of the target road surface area from the fitting result; creating a sequence table according to the positions of the road surface marks and the lane lines in the snapshot image, wherein the road surface marks correspond to first values and the lane lines correspond to second values in the sequence table; and determining the lane lines in the scene information according to the sequence table.
For example, after the road surface marks of the target area are recognized, the lane lines may be represented as 1 and the road surface marks as 0 according to a preset sequence creation mode. The lane lines are arranged from left to right in order of abscissa to generate an array of all 1s, and 0s representing the road surface marks are inserted into this array according to their positions in the snapshot image, yielding a sequence table formed by a series of 0s and 1s. The lane lines in the scene information can then be determined from the lane-line entries in the sequence table.
That is to say, in order to make the scene established according to the scene information richer, the road surface marks on the target road surface area and the corrected line information can be used to determine the multiple area types included in the target road surface area, so that the established scene of the target road surface area is more realistic.
Optionally, the apparatus further comprises: a judgment module configured to output the established scene of the target road surface area in coordinate form, and to judge, based on the target road surface area in coordinate form, whether a target vehicle has violated traffic rules. That is, after the scene of the target road surface area is established, it is output in coordinate form, and whether a target vehicle in the target road surface area has committed a violation is then judged according to the coordinates converted from the scene.
Optionally, the first obtaining module is further configured to determine whether a scene of the target road surface area is established by the image capturing device; and under the condition of not establishing, acquiring a snapshot image of the target road surface area by the image acquisition device.
That is, when it is determined from the configuration information of the image capturing device itself (this part of the scheme is generally configured manually on the image capturing device) that the scene of the target road surface area has not been established, a captured image of the target road surface area needs to be acquired by the image capturing device for scene construction of the target road surface area.
Optionally, the apparatus further comprises: the matching module is used for matching the established scene of the target road surface area with the scene cached in the image acquisition device; in the event of a mismatch, saving the scene update of the established target road surface area in the image acquisition device.
After the scene of the target road surface area is established, the established scene is matched against the scene cached in the image acquisition device to determine whether the scene establishment meets the requirement. When they do not match, the newly established scene of the target road surface area is saved as an update in the image acquisition device, ensuring the accuracy of the coordinate data when judging, according to the coordinates converted from the scene, whether a target vehicle in the target road surface area has committed a violation.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a snapshot image of the image acquisition device on the target road surface area;
s2, determining scene information of the target road surface area from the segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area;
and S3, establishing the scene of the target road surface area according to the scene information.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a snapshot image of the image acquisition device on the target road surface area;
s2, determining scene information of the target road surface area from the segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area;
and S3, establishing the scene of the target road surface area according to the scene information.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for scene creation, comprising:
acquiring a snapshot image of a target road surface area by an image acquisition device;
determining scene information of the target road surface area from a segmentation composite map corresponding to the snapshot image, wherein the scene information is at least used for indicating a lane line of the target road surface area;
and establishing a scene of the target road surface area according to the scene information.
2. The method of claim 1, wherein determining scene information of the target road surface area from the segmented composite map corresponding to the snap-shot image comprises:
performing iterative fitting on non-arrow instances in the segmented composite map to obtain a fitting result, wherein the fitting result comprises at least one of the following: endpoint information of a line, and slope information of the line;
analyzing the fitting result to determine line information in the scene information, wherein the line information includes: the lane line.
3. The method of claim 2, wherein after analyzing the fitting results to determine line information in the scene information, the method further comprises:
performing a modification operation on the determined line information in the scene information, wherein the modification operation is at least one of the following operations: correcting an initial position of the line information, supplementing a missing line in the line information, and deleting an erroneous line in the line information.
4. The method of claim 3, wherein creating the scene of the target road surface area based on the scene information comprises:
acquiring a road surface mark on the target road surface area;
determining a plurality of area types included in the target road surface area according to the road surface mark and the corrected line information;
and establishing a scene of the target road surface area according to the multiple area types.
5. The method of claim 2, wherein, in a case where the line information comprises the lane line, analyzing the fitting result to determine the line information in the scene information comprises:
acquiring a road surface mark of the target road surface area from the fitting result;
creating a sequence table according to the positions of the road surface marks and the lane lines in the snapshot image, wherein the road surface marks correspond to first values and the lane lines correspond to second values in the sequence table;
and determining the lane lines in the scene information according to the sequence table.
6. The method of claim 1, wherein after establishing the scene of the target road surface area from the scene information, the method further comprises:
outputting the established scene of the target road surface area in a coordinate form;
and determining, based on the target road surface area in coordinate form, whether a target vehicle has committed a violation.
7. The method of claim 1, wherein obtaining a snapshot of the target pavement area from the image capture device comprises:
determining whether a scene of the target road surface area has been established by the image acquisition device;
and in a case where the scene has not been established, acquiring the snapshot image of the target road surface area by the image acquisition device.
8. The method of claim 7, wherein after determining whether the scene of the target road surface region has been established by the image capture device, the method further comprises:
matching the established scene of the target road surface area with the scene cached in the image acquisition device;
and in the event of a mismatch, updating the cache by saving the established scene of the target road surface area in the image acquisition device.
9. A scene creation apparatus, comprising:
a first acquisition module, configured to acquire a snapshot image of a target road surface area captured by an image acquisition device;
a second obtaining module, configured to determine scene information of the target road surface area from a segmented composite image corresponding to the snapshot image, where the scene information is at least used to indicate a lane line of the target road surface area;
and an establishing module, configured to establish a scene of the target road surface area according to the scene information.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 8 when executed.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
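Claims 2 and 5 recite two concrete computational steps: iteratively fitting non-arrow instances in the segmented composite map to obtain endpoint and slope information, and ordering road surface marks and lane lines into a sequence table by their positions in the snapshot image. A minimal sketch of those two steps in Python, with hypothetical function names and a plain least-squares fit standing in for the patent's unspecified iterative fitting procedure:

```python
import numpy as np

def fit_line(points):
    """Least-squares line fit over one non-arrow instance's pixel
    coordinates; returns endpoint information and slope, mirroring
    the fitting result of claim 2 (simplified, non-iterative)."""
    pts = np.asarray(points, dtype=float)
    slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
    x0, x1 = pts[:, 0].min(), pts[:, 0].max()
    endpoints = ((x0, slope * x0 + intercept), (x1, slope * x1 + intercept))
    return endpoints, slope

def build_sequence_table(mark_xs, lane_xs, mark_value=1, lane_value=2):
    """Order road surface marks and lane lines left to right by their
    x-position in the snapshot image (claim 5): marks map to a first
    value, lane lines to a second value."""
    tagged = [(x, mark_value) for x in mark_xs] + [(x, lane_value) for x in lane_xs]
    return [value for _, value in sorted(tagged)]
```

For example, a mark at x=150 between lane lines at x=100 and x=200 yields the table `[2, 1, 2]`, from which the lane containing that mark can be read off.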
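Claim 6 outputs the established scene in coordinate form and then judges violations against it. The patent does not specify the judging logic; one common realization (an assumption here, not the claimed method) is a point-in-polygon test of the vehicle's position against a region output in coordinate form:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting (even-odd rule) test: is (x, y) inside the
    polygon given as a list of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_violation(vehicle_xy, forbidden_region):
    """Flag a violation when the vehicle's position falls inside a
    forbidden region (e.g. an emergency lane) of the scene."""
    return point_in_polygon(vehicle_xy[0], vehicle_xy[1], forbidden_region)
```

With the scene regions already in image (or ground-plane) coordinates, the same test can be applied per frame to each tracked vehicle position.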
CN202010956270.5A 2020-09-11 2020-09-11 Scene establishing method and device, storage medium and electronic device Pending CN112183244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010956270.5A CN112183244A (en) 2020-09-11 2020-09-11 Scene establishing method and device, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN112183244A true CN112183244A (en) 2021-01-05

Family

ID=73920662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010956270.5A Pending CN112183244A (en) 2020-09-11 2020-09-11 Scene establishing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112183244A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136789A (en) * 2011-11-28 2013-06-05 同济大学 Traffic accident road base map information processing method based on topographic map and image
CN105528588A (en) * 2015-12-31 2016-04-27 百度在线网络技术(北京)有限公司 Lane line recognition method and device
CN108230437A (en) * 2017-12-15 2018-06-29 深圳市商汤科技有限公司 Scene reconstruction method and device, electronic equipment, program and medium
CN110516610A (en) * 2019-08-28 2019-11-29 上海眼控科技股份有限公司 A kind of method and apparatus for road feature extraction
CN110889388A (en) * 2019-12-03 2020-03-17 上海眼控科技股份有限公司 Violation identification method, device, equipment and storage medium
CN111126323A (en) * 2019-12-26 2020-05-08 广东星舆科技有限公司 Bayonet element recognition and analysis method and system serving for traffic violation detection
CN111311710A (en) * 2020-03-20 2020-06-19 北京四维图新科技股份有限公司 High-precision map manufacturing method and device, electronic equipment and storage medium
CN111428538A (en) * 2019-01-09 2020-07-17 阿里巴巴集团控股有限公司 Lane line extraction method, device and equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470008A (en) * 2021-07-26 2021-10-01 南通市江海公路工程有限公司 Method and system for intelligently monitoring construction quality of asphalt pavement
CN113470008B (en) * 2021-07-26 2023-08-18 南通市江海公路工程有限公司 Method and system for intelligently monitoring construction quality of asphalt pavement
CN114038197A (en) * 2021-11-24 2022-02-11 浙江大华技术股份有限公司 Scene state determination method and device, storage medium and electronic device
CN117014247A (en) * 2023-08-28 2023-11-07 广东金朋科技有限公司 Scene generation method, system and storage medium based on state learning

Similar Documents

Publication Publication Date Title
CN112183244A (en) Scene establishing method and device, storage medium and electronic device
CN112069856A (en) Map generation method, driving control method, device, electronic equipment and system
CN110969719A (en) Automatic inspection method, system, terminal equipment and storage medium
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN112465970A (en) Navigation map construction method, device, system, electronic device and storage medium
CN113252053A (en) High-precision map generation method and device and electronic equipment
CN111958595B (en) Multi-sensor asynchronous information fusion system and method for transformer substation inspection robot
CN115984486A (en) Method and device for generating bridge model fusing laser radar and depth camera
CN112799430B (en) Programmable unmanned aerial vehicle-based road surface image intelligent acquisition method
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN113850837B (en) Video processing method and device, electronic equipment, storage medium and computer product
CN115830342A (en) Method and device for determining detection frame, storage medium and electronic device
CN111860040A (en) Station signal equipment state acquisition method and device and computer equipment
CN113554610A (en) Photovoltaic module operation state detection method and application device thereof
CN113643405A (en) Marking method, examining method, system and equipment for graph-model linkage
CN109145424B (en) Bridge data identification method and system for ground penetrating radar data
CN112700653A (en) Method, device and equipment for judging illegal lane change of vehicle and storage medium
CN115223030B (en) Pavement disease detection system and method
CN113536860B (en) Key frame extraction method, and vectorization method of road traffic equipment and facilities
CN117274817B (en) Automatic crack identification method and device, terminal equipment and storage medium
CN117710592A (en) Map topology detection system, method, electronic equipment and storage medium
KR100642870B1 (en) Method for updating road facility database using computer vision
CN117496203A (en) Object matching method and device, storage medium and electronic device
CN114267022A (en) Object abnormality detection method and device, storage medium, and electronic device
CN116310159A (en) Automatic driving scene extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination