CN110516610A - Method and apparatus for road feature extraction - Google Patents

Method and apparatus for road feature extraction

Info

Publication number
CN110516610A
CN110516610A
Authority
CN
China
Prior art keywords
information
target
scene
image
extraction
Prior art date
Legal status
Pending
Application number
CN201910803773.6A
Other languages
Chinese (zh)
Inventor
周康明 (Zhou Kangming)
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN201910803773.6A
Publication of CN110516610A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V20/00 Scenes; Scene-specific elements

Abstract

The purpose of the application is to provide a method and apparatus for road feature extraction. The method obtains multiple frames of scene images of a target road in different time periods; obtains scene segmentation models of different structures, and iteratively extracts the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information; filters the moving-target information out of the multiple groups of target information to obtain the static target information of each group; performs feature fusion on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image; and extracts road feature information from the fused feature image. The method makes the most of the advantages of models with different structures and obtains more complete road feature information without interference from occlusion, weather, time of day and similar factors, so that the extracted feature information is more accurate and widely applicable.

Description

Method and apparatus for road feature extraction
Technical field
This application relates to the field of intelligent transportation algorithms, and in particular to a method and apparatus for road feature extraction.
Background technique
Effectively extracting the traffic information on a road, such as stop lines, guide lines, lane lines, double yellow lines, zebra crossings, vehicles and pedestrians, and then making subsequent violation judgments, is a very challenging problem. At present, road feature information is extracted from single-frame images. This approach, however, depends too much on chance and is easily disturbed by many factors, such as occlusion of the road markings, sudden strong-light interference, night conditions and the limitations of the algorithm itself, all of which lead to poor extraction results.
Summary of the invention
The purpose of the application is to provide a method and apparatus for road feature extraction, solving the problems in the prior art that road feature extraction performs poorly and is easily affected by the environment.
According to one aspect of the application, a method for road feature extraction is provided, the method comprising:
obtaining multiple frames of scene images of a target road in different time periods;
obtaining scene segmentation models of different structures, and iteratively extracting the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information;
filtering the moving-target information out of the multiple groups of target information to obtain the static target information of each group;
performing feature fusion on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image;
extracting road feature information from the fused feature image.
Further, after the road feature information is extracted from the fused feature image, the method comprises:
storing the road feature information and the moving-target information as a structured file;
determining, according to the structured file and preset road behavior rules, the target objects committing violations.
Further, obtaining scene segmentation models of different structures and iteratively extracting the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information comprises:
obtaining a scene segmentation model list, wherein the scene segmentation model list includes scene segmentation models of multiple different structures;
performing a preset iterative extraction operation with the first model in the scene segmentation model list to obtain a first group of target information;
iterating over the remaining models in the scene segmentation model list, each performing the preset iterative extraction operation performed by the first model, to obtain multiple groups of target information.
Further, the preset iterative extraction operation comprises:
performing segmentation information extraction on the scene images in a target time period to obtain segmentation information, wherein the segmentation information includes the target category information of the scene images in the target time period;
constructing, according to the target category information of the scene images in the target time period, the binary grayscale feature maps corresponding to the scene images;
performing merged extraction on the target information regions in the binary grayscale feature maps.
Further, constructing, according to the target category information of the scene images in the target time period, the binary grayscale feature maps corresponding to the scene images comprises:
constructing a mask image with the same width and height as the scene images in the target time period;
assigning values to the pixels in the mask image according to the target category information of the scene images in the target time period and a preset assignment condition to obtain a grayscale feature map;
obtaining, from the grayscale feature map and the C target categories, binary grayscale feature maps containing C-1 target categories, wherein C is a positive integer.
Further, the preset assignment condition satisfies the following formula:
M(x, y) = NN(x, y), NN(x, y) ∈ {0, 1, ..., C-1}    formula (2)
wherein M(x, y) denotes the pixel value of the pixel at position (x, y) in the mask image, C denotes the total number of target categories of the scene images in the target time period, and NN(x, y) denotes the category value obtained by the pixel at position (x, y) of the scene images in the target time period after scene segmentation.
Further, performing merged extraction on the target information regions in the binary grayscale feature maps comprises:
judging whether the pixel value of a pixel in the mask image is equal to the pixel values of all pixels in a preset neighborhood of that pixel, and if so, merging the adjacent equal pixel values to determine the connected domains of the mask image;
extracting the connected domains of the target information regions according to the binary grayscale feature maps and the connected domains of the mask image.
Further, the weighted voting satisfies the following condition:
M′(x, y) = argmax_c Σ_{k=1..m} Σ_{j=1..n} γ_c · [M_kj(x, y) = c],  c ∈ {0, 1, ..., C-1}
wherein m denotes the number of scene segmentation models, n denotes the number of frames of scene images of the target road, M′(x, y) denotes the fused pixel value at coordinate (x, y) of the target road image obtained after fusing the n frames of scene images through the m scene segmentation models, C is the number of categories in the scene images, M_kj(x, y) denotes the feature category label at coordinate (x, y) of the target road in the j-th frame of scene image under the k-th scene segmentation model, γ denotes the weight of a category, and the double sum counts how many times the m scene segmentation models, over the n frames of scene images, classify the pixel at coordinate (x, y) as category c.
According to another aspect of the application, an apparatus for road feature extraction is further provided, the apparatus comprising:
one or more processors; and
a memory storing computer-readable instructions which, when executed, cause the processor to perform the operations of the method described above.
According to yet another aspect of the application, a computer-readable medium is further provided, on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the method described above.
Compared with the prior art, the application obtains multiple frames of scene images of a target road in different time periods; obtains scene segmentation models of different structures and iteratively extracts the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information; filters the moving-target information out of the multiple groups of target information to obtain the static target information of each group; performs feature fusion on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image; and extracts road feature information from the fused feature image. In this way the advantages of models with different structures are exploited to the fullest, more complete road feature information is obtained without interference from occlusion, weather, time of day and similar factors, and the extracted feature information is more accurate and widely applicable.
Detailed description of the invention
Other features, objects and advantages of the application will become more apparent by reading the following detailed description of non-restrictive embodiments made with reference to the accompanying drawings:
Fig. 1 shows a flow diagram of a method for road feature extraction provided according to one aspect of the application;
Fig. 2 shows the effect of extracting intersection feature information with a scene segmentation model in an embodiment of the application;
Fig. 3 shows scene pictures captured at the same intersection under different conditions in an embodiment of the application;
Fig. 4 shows the effect of extracting target information from the multiple frames of Fig. 3 with multiple models in an embodiment of the application;
Fig. 5 shows the effect of fusing the target information extracted in Fig. 4 in an embodiment of the application;
Fig. 6 shows a road information extraction result at night in an embodiment of the application;
Fig. 7 shows a flow diagram of the method for extracting intersection scene feature information in a specific embodiment of the application.
The same or similar reference numerals in the drawings represent the same or similar components.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the application, the terminal, the device of the service network and the trusted party each include one or more processors (for example a central processing unit (CPU)), an input/output interface, a network interface and a memory.
The memory may include non-permanent memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a flow diagram of a method for road feature extraction provided according to one aspect of the application; the method comprises steps S11 to S15.
In step S11, multiple frames of scene images of the target road in different time periods are obtained. Here, the target road is the road or intersection whose scene information needs to be extracted; the scene pictures captured on the target road in different time periods are obtained, so that self-learning of scene information can be carried out on multiple frames of pictures of the same intersection, covering a variety of environments and time periods, and the extracted intersection feature information is more accurate and complete.
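For illustration only (this sketch is not part of the original disclosure), step S11 could be realized along the following lines in Python; the directory layout (one JPEG per capture) and the use of OpenCV for image I/O are assumptions:

```python
import glob
import cv2  # OpenCV, used here only for image I/O

def load_scene_frames(snapshot_dir):
    """Load every captured snapshot of one target road/intersection.

    Assumes one JPEG per capture, taken in different time periods; the
    directory layout is hypothetical, not mandated by the application.
    """
    frames = []
    for path in sorted(glob.glob(f"{snapshot_dir}/*.jpg")):
        img = cv2.imread(path)  # BGR image of shape (h, w, 3), or None on failure
        if img is not None:
            frames.append(img)
    return frames
```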
In step S12, scene segmentation models of different structures are obtained, and the segmentation information of all scene images is iteratively extracted with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information. Here, scene segmentation is performed on the multiple frames of scene images obtained in the different time periods: the built scene segmentation models of different structures are used to iteratively extract the segmentation information of the scene images, so that more accurate segmentation information is obtained, and the target information in the scene images is extracted according to the segmentation information. The target information is the relevant information of vehicles, pedestrians, zebra crossings, guide lines and the like, such as position information (for example, where a guide line is located); the moving-target information within the target information is the information of movable objects such as vehicles and pedestrians. The segmentation information that the scene segmentation model of each structure extracts from all scene images forms one group; iterative extraction with scene segmentation models of multiple different structures yields the corresponding multiple groups of segmentation information, from which multiple groups of target information are extracted for subsequent fusion.
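Purely as an illustrative sketch (the model objects and their .segment() interface are assumptions, not specified by the application), the iteration of m segmentation models over n frames in step S12 might look like this:

```python
def extract_masks(models, frames):
    """Run every scene segmentation model over every frame (step S12).

    `models` is a list of m segmentation models of different structures
    (e.g. pspnet18, pspnet50, pspnet156); each is assumed to expose a
    .segment(image) method returning an (h, w) array of category labels
    in {0, ..., C-1}. Returns m groups of n label masks, i.e. the M_kj
    in the notation used below.
    """
    all_masks = []
    for model in models:                                    # k = 1, ..., m
        group = [model.segment(frame) for frame in frames]  # j = 1, ..., n
        all_masks.append(group)
    return all_masks
```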
In step S13, the moving-target information in the multiple groups of target information is filtered out to obtain the static target information of each group. Here, the multiple groups of target information extracted from all scene images by the scene segmentation models of multiple structures are filtered to remove the moving-target information therein, after which self-learning of the road information proceeds. Thus, in step S14, feature fusion is performed on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image; then, in step S15, road feature information is extracted from the fused feature image. Here, after the moving-target information is filtered out, the remaining static target information is self-learned, that is, feature fusion is performed, and road feature information is extracted from the fusion result. In the application, weighted voting is used to fuse the features of the multiple groups of static target information, and more complete target information and regions, such as guide lines and zebra crossings, are extracted on the basis of the fusion result. The above method makes the most of the advantages of models with different structures while also obtaining good vehicle and lane-line feature information.
In an embodiment of the application, after the road feature information is extracted from the fused feature image, the road feature information and the moving-target information may be stored as a structured file, and the target objects committing violations are determined according to the structured file and preset road behavior rules. Here, the target information, which includes moving-target information, is obtained with the scene segmentation models of multiple structures and stored; the moving-target information in the target information is filtered out, feature fusion is performed on the static target information, and the corresponding road feature information is extracted. The moving-target information stored before filtering and the road feature information extracted after fusion are converted into a structured file for storage, where the structured file may be a file in JSON format. An upper-layer violation judgment module then judges violations according to the stored structured file and identifies the violating target objects, such as vehicles running red lights or pedestrians crossing the road off the zebra crossing.
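As an illustrative sketch only, the structured file of this embodiment could be written as JSON along the following lines; the field names are assumptions, not part of the disclosure:

```python
import json

def save_structured_file(road_features, moving_targets, path):
    """Store road feature information and moving-target information
    as a JSON structured file; the field names are illustrative."""
    record = {
        "road_features": road_features,    # e.g. lane lines, stop lines, guide lines
        "moving_targets": moving_targets,  # e.g. vehicles, pedestrians
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False, indent=2)
```

An upper-layer violation judgment module can then read this file and apply the preset road behavior rules to identify the violating target objects.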
In an embodiment of the application, in step S11, a scene segmentation model list is obtained, wherein the scene segmentation model list includes scene segmentation models of multiple different structures; a preset iterative extraction operation is performed with the first model in the scene segmentation model list to obtain a first group of target information; and the remaining models in the scene segmentation model list are iterated over, each performing the preset iterative extraction operation performed by the first model, to obtain multiple groups of target information. Here, a list of scene segmentation models of different structures is obtained; for example, the list includes the segmentation models pspnet18, pspnet50 and pspnet156, where the number in the model name indicates, for example, the number of feature-extraction convolutional layers. The scene segmentation model list is pre-configured. The first model in the list performs the iterative extraction operation first, that is, the scene-segmentation extraction operation on the scene images; the remaining models repeat the operation performed by the first model, and finally multiple groups of target information are obtained through the iterative extraction operations of the scene segmentation models of multiple different structures on all scene images. Specifically, the preset iterative extraction operation performed by the scene segmentation model of each structure, as first performed by the first model, is the following process:
performing segmentation information extraction on the scene images in a target time period to obtain segmentation information, wherein the segmentation information includes the target category information of the scene images in the target time period;
constructing, according to the target category information of the scene images in the target time period, the binary grayscale feature maps corresponding to the scene images;
performing merged extraction on the target information regions in the binary grayscale feature maps.
Here, the steps for extracting target information with a single model on a single frame are as follows. Step A): scene information is extracted with the scene segmentation algorithm. Suppose the intersection scene image is I, with width w and height h; then
I(x, y), x ∈ {0, 1, 2, ..., w-1}, y ∈ {0, 1, 2, ..., h-1}    formula (1)
denotes the pixel value of the scene image I at coordinate (x, y). Segmentation information extraction is performed on the scene image I and C categories of segmentation information are extracted, so that a corresponding mask image M is constructed for the scene image I according to the C categories of segmentation information to determine the binary grayscale feature maps; merged extraction is then performed on the target information regions in the binary grayscale maps to obtain the required target information. Specifically, when constructing the binary grayscale feature maps corresponding to the scene image, a mask image with the same width and height as the scene images in the target time period is constructed; values are assigned to the pixels in the mask image according to the target category information of the scene images in the target time period and the preset assignment condition to obtain a grayscale feature map; and binary grayscale feature maps containing C-1 target categories are obtained from the grayscale feature map and the C target categories, where C is a positive integer. The preset assignment condition satisfies the following formula:
M(x, y) = NN(x, y), NN(x, y) ∈ {0, 1, ..., C-1}    formula (2)
where M(x, y) denotes the pixel value of the pixel at position (x, y) in the mask image, C denotes the total number of target categories of the scene images in the target time period, and NN(x, y) denotes the category value obtained by the pixel at position (x, y) of the scene images in the target time period after scene segmentation.
Here, step B): the mask image M is created with the same width and height as the original scene image I, and all its pixels are initialized to 0. Segmentation information extraction is performed on the scene image I to obtain C categories of segmentation information, and values are assigned to the mask image M with formula (2): the value of each pixel in the mask image is its category label, between 0 and C-1, indicating the category the pixel belongs to. The result is shown in the grayscale map at B in Fig. 2, where A shows the mask map obtained from the grayscale map according to a preset matrix; a color map can be obtained here according to a preset color matrix. Then, C-1 binary grayscale feature maps are obtained from the resulting grayscale feature map M and the C categories, where C-1 is the number of categories left after the background category is removed and M_i, i ∈ {1, 2, ..., C-1}, denotes the binary feature map of the i-th category; for example, M_1 denotes the stop line, M_2 the zebra crossing, M_3 the left-turn guide line, ..., M_18 the car, and so on.
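For illustration (a minimal NumPy sketch, not part of the disclosure), step B) amounts to the following, assuming the segmentation network has already produced the label map NN(x, y):

```python
import numpy as np

def build_masks(labels, num_classes):
    """Step B): build the mask image M and the C-1 binary feature maps M_i.

    `labels` is the (h, w) category-label map NN(x, y) output by the
    segmentation network, with values in {0, ..., C-1}; category 0 is
    taken to be the background.
    """
    mask = np.zeros(labels.shape, dtype=np.uint8)  # all pixels initialized to 0
    mask[...] = labels                             # M(x, y) = NN(x, y), formula (2)
    # one binary map per non-background category i in {1, ..., C-1}
    binary_maps = [(mask == i).astype(np.uint8) for i in range(1, num_classes)]
    return mask, binary_maps
```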
Then, step C): merged extraction is performed on the target information regions in the binary grayscale feature maps. Specifically, it is judged whether the pixel value of a pixel in the mask image is equal to the pixel values of all pixels in a preset neighborhood of that pixel; if so, the adjacent equal pixel values are merged to determine the connected domains of the mask image, and the connected domains of the target information regions are extracted according to the binary grayscale feature maps and the connected domains of the mask image. Here, let M(x, y) denote the pixel value (a label between 0 and C-1) of the mask image M at a coordinate position; the pixels in the preset neighborhood of M(x, y) are, for example, M(x-1, y), M(x, y-1), M(x-1, y-1), M(x+1, y), M(x, y+1) and M(x+1, y+1). When these pixel values are equal to M(x, y), connected-domain extraction can be carried out: the adjacent equal pixel values are merged to form contour connected domains, and the connected domains of the corresponding target information are then extracted according to the extracted binary feature maps M_i, yielding the required target information. As shown in Fig. 2, target regions such as the left-turn guide line, the target vehicles and the zebra crossing are extracted.
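A sketch of step C) using OpenCV's connected-component analysis: merging adjacent equal-valued pixels with an 8-neighbourhood corresponds to the preset neighborhood described above, while the min_area threshold is an illustrative assumption, not something the text requires:

```python
import cv2

def extract_regions(binary_map, min_area=50):
    """Step C): extract the connected domains (target regions) of one
    binary feature map M_i."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        binary_map, connectivity=8)   # merge adjacent equal-valued pixels
    regions = []
    for i in range(1, num):           # component 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:          # drop tiny noise components (assumption)
            regions.append((x, y, w, h))  # bounding box of one target region
    return regions
```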
In an embodiment of the application, when target information is extracted from n frames of scene images of the same intersection with scene segmentation models of m different structures, the above steps A), B) and C) are performed in turn to obtain the feature information M_kj, k ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n}, where M_kj(x, y) denotes the feature category of the intersection at coordinate (x, y) in the j-th frame of scene image I_kj under the k-th segmentation model. The final intersection result obtained after fusing the n frames of scene images through the m segmentation models is M′(x, y), the fused pixel value of all I_kj at pixel coordinate (x, y). The multi-model multi-frame fusion uses weighted voting with per-category weights, and the weighted voting satisfies the following condition:
M′(x, y) = argmax_c Σ_{k=1..m} Σ_{j=1..n} γ_c · [M_kj(x, y) = c],  c ∈ {0, 1, ..., C-1}
where m denotes the number of scene segmentation models, n denotes the number of frames of scene images of the target road, M′(x, y) denotes the fused pixel value at coordinate (x, y) of the target road image obtained after fusing the n frames of scene images through the m scene segmentation models, C is the number of categories in the scene images, M_kj(x, y) denotes the feature category label at coordinate (x, y) of the target road in the j-th frame of scene image under the k-th scene segmentation model, γ denotes the weight of a category, [·] equals 1 when its condition holds and 0 otherwise, and the double sum is the count matrix recording how many weighted times the m scene segmentation models, over the n frames of scene images, classify the pixel at coordinate (x, y) as category c. In the embodiment described here, the category weights fall into three levels: the background weight is the lowest, the moving-target weight comes next, and the static-target weight is the highest. The category with the most votes in the count matrix is finally obtained through the argmax function and becomes the final category of the pixel.
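Illustratively (a minimal NumPy sketch under the notation above, not part of the disclosure), the weighted vote can be computed as follows, with class_weights encoding the three weight levels of this embodiment:

```python
import numpy as np

def fuse_masks(all_masks, num_classes, class_weights):
    """Weighted-voting fusion of the m x n label masks M_kj into M'.

    `class_weights[c]` is the weight gamma of category c; following the
    embodiment, the background gets the lowest weight, moving targets a
    middle weight and static targets the highest weight.
    """
    h, w = all_masks[0][0].shape
    votes = np.zeros((num_classes, h, w), dtype=np.float32)  # the count matrix
    for group in all_masks:      # k = 1, ..., m segmentation models
        for mask in group:       # j = 1, ..., n frames
            for c in range(num_classes):
                votes[c] += class_weights[c] * (mask == c)
    return votes.argmax(axis=0)  # per-pixel category with the most weighted votes
```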
On the basis of the method of extracting target information from multiple frames of scene images of the same intersection with scene segmentation models of multiple different structures, steps B) and C) of the above embodiment are performed in turn, and the target information and regions extracted from the fused feature map, such as the left-turn guide lines, target vehicles, zebra crossings and pedestrians, are obtained; the effect is shown in the figures. Fig. 3 shows scene pictures of the same intersection captured in different time periods, under different occlusion conditions and under different illumination; Fig. 4 shows the extraction effect of the multiple models on the multiple frames in Fig. 3 after the moving targets are removed; Fig. 5 shows the fused effect of the road feature information extracted in Fig. 4; and Fig. 6 shows the road information extraction result for this intersection at night. Figs. 5 and 6 show that, compared with single-frame extraction, multi-frame extraction achieves a higher pixel accuracy and is not affected by occlusion; after fusion with the daytime road feature information, a good road information extraction result can also be obtained at night, unaffected by time, weather or occlusion.
In a specific embodiment of the application, as shown in Fig. 7: S1: the evidence pictures captured in different time periods are obtained; S2: the list of scene segmentation models of different structures to be used is obtained; S3: the segmentation information of all evidence pictures is iteratively extracted with the first model and saved; S4: all segmentation models are iterated over, each performing the same operation as the previous step and saving the segmentation information; S5: the moving targets, such as vehicles and pedestrians, are filtered out of the previously stored segmentation information; S6: feature fusion is performed on the segmentation results at the same pixel position by weighted voting, converting the previously stored three-dimensional feature information into two-dimensional feature information; S7: the required road information, such as lane lines, stop lines and guide lines, is extracted from the fused two-dimensional feature map and saved as a structured file; S8: the structured file is returned to the upper-layer violation judgment module for violation judgment. Here, the segmentation information is the position information of each object in the picture, including the position information of stop lines, zebra crossings, pedestrians, vehicles, traffic lights and the like, and the captured evidence pictures are the scene pictures of the intersection. Scene segmentation is performed on a scene picture: for example, picture A is segmented through the segmentation network into an image B of the same size as the original, where image B is a color image in which different color regions carry different meanings, e.g., red denotes a solid line and yellow denotes a stop line; the different colors constitute the segmentation information. Three-dimensional feature information of size W*H*N is obtained after the scene images are segmented: using m segmentation models and n frames of images of the same intersection yields N = m*n grayscale feature maps M after segmentation, and the N feature maps form a three-dimensional matrix W*H*N. Weighted voting fuses the N W*H matrices into a single W*H matrix, i.e., a two-dimensional matrix, which is the two-dimensional feature information map. The filtering in step S5 removes the moving targets from the segmentation information: the obtained grayscale map M contains C categories, the value of each pixel is between 0 and C-1, and each value represents a category; filtering out a moving target means assigning 0 to the pixels whose value equals that moving target's category label. For example, if the segmentation network defines the vehicle label (category) as 3, all pixels with gray value 3 in the grayscale map M are set to 0. After the moving targets are filtered out, fusion extraction, i.e., the self-learning process for static targets, is carried out on the static targets. Through the above feature extraction process, relatively complete road scene information can be learned under different conditions, unaffected by occlusion, time, weather and similar factors, and the extracted scene information is more accurate.
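A final illustrative sketch of the step S5 filtering described above; the label value 3 for "vehicle" follows the example in the text, while real label tables depend on the segmentation network:

```python
import numpy as np

def filter_moving_targets(mask, moving_labels=(3,)):
    """Step S5: zero out moving-target categories so that only static
    road features take part in fusion. Label 3 for 'vehicle' is the
    example given in the text; actual labels are network-specific."""
    out = np.asarray(mask).copy()
    for label in moving_labels:
        out[out == label] = 0  # reassign moving-target pixels to background
    return out
```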
In addition, an embodiment of the application further provides a computer-readable medium on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the aforementioned method for road feature extraction.
In an embodiment of the application, an apparatus for road feature extraction is further provided, the apparatus comprising:
one or more processors; and
a memory storing computer-readable instructions which, when executed, cause the processor to perform the operations of the aforementioned method.
For example, the computer-readable instructions, when executed, cause the one or more processors to:
obtain multiple frames of scene images of a target road in different time periods;
obtain scene segmentation models of different structures, and iteratively extract the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information;
filter the moving-target information out of the multiple groups of target information to obtain the static target information of each group;
perform feature fusion on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image;
extract road feature information from the fused feature image.
Obviously, those skilled in the art can make various modifications and variations to the application without departing from the spirit and scope of the application. Thus, if these modifications and variations of the application fall within the scope of the claims of the application and their equivalent technologies, the application is also intended to include them.
It should be noted that the application may be implemented in software and/or a combination of software and hardware; for example, it may be implemented with an application-specific integrated circuit (ASIC), a general-purpose computer or any other similar hardware device. In one embodiment, the software program of the application may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the application (including relevant data structures) may be stored in a computer-readable recording medium, for example a RAM memory, a magnetic or optical drive, a floppy disk or a similar device. In addition, some steps or functions of the application may be implemented in hardware, for example as a circuit that cooperates with a processor to execute the individual steps or functions.
In addition, part of the application may be applied as a computer program product, for example computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution according to the application through the operation of the computer. The program instructions that invoke the method of the application may be stored in a fixed or removable recording medium, and/or transmitted through broadcast or a data stream in another signal-bearing medium, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment of the application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the method and/or technical solution based on the aforementioned multiple embodiments of the application.
It is obvious to those skilled in the art that the application is not limited to the details of the above exemplary embodiments, and that the application can be realized in other specific forms without departing from the spirit or essential characteristics of the application. Therefore, from whatever point of view, the embodiments should be regarded as exemplary and non-restrictive, and the scope of the application is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of the equivalent elements of the claims be included in the application. No reference sign in a claim should be construed as limiting the claim involved. In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Words such as "first" and "second" denote names and do not denote any particular order.

Claims (10)

1. A method for road feature extraction, characterized in that the method comprises:
obtaining multiple frames of scene images of a target road in different time periods;
obtaining scene segmentation models of different structures, and iteratively extracting the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information, wherein the target information includes moving-target information;
filtering the moving-target information out of the multiple groups of target information to obtain the static target information of each group;
performing feature fusion on the static target information of each group at the same pixel position by weighted voting to obtain a fused feature image;
extracting road feature information from the fused feature image.
2. The method according to claim 1, characterized in that, after the road feature information is extracted from the fused feature image, the method comprises:
storing the road feature information and the moving-target information as a structured file;
determining, according to the structured file and preset road behavior rules, the target objects committing violations.
3. The method according to claim 1, characterized in that obtaining scene segmentation models of different structures and iteratively extracting the segmentation information of all scene images with the scene segmentation model of each structure to obtain multiple groups of target information comprises:
obtaining a scene segmentation model list, wherein the scene segmentation model list includes scene segmentation models of multiple different structures;
performing a preset iterative extraction operation with the first model in the scene segmentation model list to obtain a first group of target information;
iterating over the remaining models in the scene segmentation model list, each performing the preset iterative extraction operation performed by the first model, to obtain multiple groups of target information.
4. The method according to claim 3, characterized in that the preset iterative extraction operation comprises:
performing segmentation information extraction on the scene images in a target time period to obtain segmentation information, wherein the segmentation information includes the target category information of the scene images in the target time period;
constructing, according to the target category information of the scene images in the target time period, the binary grayscale feature maps corresponding to the scene images;
performing merged extraction on the target information regions in the binary grayscale feature maps.
5. The method according to claim 4, characterized in that constructing, according to the target category information of the scene images in the target time period, the binary grayscale feature maps corresponding to the scene images comprises:
constructing a mask image with the same width and height as the scene images in the target time period;
assigning values to the pixels in the mask image according to the target category information of the scene images in the target time period and a preset assignment condition to obtain a grayscale feature map;
obtaining, from the grayscale feature map and the C target categories, binary grayscale feature maps containing C-1 target categories, wherein C is a positive integer.
6. The method according to claim 5, characterized in that the preset assignment condition satisfies the following formula:
M(x, y) = NN(x, y), NN(x, y) ∈ {0, 1, ..., C-1}
wherein M(x, y) denotes the pixel value of the pixel at position (x, y) in the mask image, C denotes the total number of target categories of the scene images in the target time period, and NN(x, y) denotes the category value obtained by the pixel at position (x, y) of the scene images in the target time period after scene segmentation.
7. The method according to claim 5, characterized in that performing merged extraction on the target information regions in the binary grayscale feature maps comprises:
judging whether the pixel value of a pixel in the mask image is equal to the pixel values of all pixels in a preset neighborhood of that pixel, and if so, merging the adjacent equal pixel values to determine the connected domains of the mask image;
extracting the connected domains of the target information regions according to the binary grayscale feature maps and the connected domains of the mask image.
8. The method according to claim 1, characterized in that the weighted voting satisfies the following condition:
M′(x, y) = argmax_c Σ_{k=1..m} Σ_{j=1..n} γ_c · [M_kj(x, y) = c],  c ∈ {0, 1, ..., C-1}
wherein m denotes the number of scene segmentation models, n denotes the number of frames of scene images of the target road, M′(x, y) denotes the fused pixel value at coordinate (x, y) of the target road image obtained after fusing the n frames of scene images through the m scene segmentation models, C is the number of categories in the scene images, M_kj(x, y) denotes the feature category label at coordinate (x, y) of the target road in the j-th frame of scene image under the k-th scene segmentation model, γ denotes the weight of a category, and the double sum counts how many times the m scene segmentation models, over the n frames of scene images, classify the pixel at coordinate (x, y) as category c.
9. An apparatus for road feature extraction, characterized in that the apparatus comprises:
one or more processors; and
a memory storing computer-readable instructions which, when executed, cause the processor to perform the operations of the method of any one of claims 1 to 8.
10. A computer-readable medium on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the method of any one of claims 1 to 8.
CN201910803773.6A 2019-08-28 2019-08-28 Method and apparatus for road feature extraction Pending CN110516610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803773.6A CN110516610A (en) Method and apparatus for road feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910803773.6A CN110516610A (en) Method and apparatus for road feature extraction

Publications (1)

Publication Number Publication Date
CN110516610A true CN110516610A (en) 2019-11-29

Family

ID=68628580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803773.6A CN110516610A (en) Method and apparatus for road feature extraction

Country Status (1)

Country Link
CN (1) CN110516610A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183244A (en) * 2020-09-11 2021-01-05 浙江大华技术股份有限公司 Scene establishing method and device, storage medium and electronic device
CN112528944A (en) * 2020-12-23 2021-03-19 杭州海康汽车软件有限公司 Image identification method and device, electronic equipment and storage medium
CN112633151A (en) * 2020-12-22 2021-04-09 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra crossing in monitored image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN106485715A (en) * 2016-09-09 2017-03-08 电子科技大学成都研究院 A kind of unstructured road recognition methods
CN109543520A (en) * 2018-10-17 2019-03-29 天津大学 A kind of lane line parametric method of Semantic-Oriented segmentation result

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN106228138A (en) * 2016-07-26 2016-12-14 国网重庆市电力公司电力科学研究院 A kind of Road Detection algorithm of integration region and marginal information
CN106485715A (en) * 2016-09-09 2017-03-08 电子科技大学成都研究院 A kind of unstructured road recognition methods
CN109543520A (en) * 2018-10-17 2019-03-29 天津大学 A kind of lane line parametric method of Semantic-Oriented segmentation result

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEI JIANG ET AL.: "DFNet: Semantic Segmentation on Panoramic Images with Dynamic Loss Weights and Residual Fusion Block", 《2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)》 *
赵翔 (ZHAO Xiang): "Lane-level localization method based on vision and millimeter-wave radar", Journal of Shanghai Jiao Tong University (《上海交通大学学报》) *
黎华东 (LI Huadong): "Research on violation recognition algorithms in intelligent transportation", China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183244A (en) * 2020-09-11 2021-01-05 浙江大华技术股份有限公司 Scene establishing method and device, storage medium and electronic device
CN112633151A (en) * 2020-12-22 2021-04-09 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra crossing in monitored image
CN112633151B (en) * 2020-12-22 2024-04-12 浙江大华技术股份有限公司 Method, device, equipment and medium for determining zebra stripes in monitoring images
CN112528944A (en) * 2020-12-23 2021-03-19 杭州海康汽车软件有限公司 Image identification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110516610A (en) Method and apparatus for road feature extraction
CN111160205B (en) Method for uniformly detecting multiple embedded types of targets in traffic scene end-to-end
CN111582261B (en) License plate recognition method and license plate recognition device for non-motor vehicle
WO2013186662A1 (en) Multi-cue object detection and analysis
CN110516514B (en) Modeling method and device of target detection model
CN103996041A (en) Vehicle color identification method and system based on matching
CN110390314A (en) Visual perception method and apparatus
CN112347933A (en) Traffic scene understanding method and device based on video stream
CN110781980B (en) Training method of target detection model, target detection method and device
CN112287912A (en) Deep learning-based lane line detection method and device
Hinz et al. Car detection in aerial thermal images by local and global evidence accumulation
Kim et al. Effective traffic lights recognition method for real time driving assistance systemin the daytime
CN106295627A (en) Method and device for identifying "text psoriasis" (spam-text) pictures
CN113158954A (en) Automatic traffic off-site zebra crossing area detection method based on AI technology
Molina-Cabello et al. Vehicle type detection by convolutional neural networks
CN104809438B (en) Method and apparatus for detecting electronic eyes (traffic cameras)
CN111292331B (en) Image processing method and device
CN113468938A (en) Traffic image recognition method and device, image processing equipment and readable storage medium
CN113989753A (en) Multi-target detection processing method and device
Palmer et al. Predicting the perceptual demands of urban driving with video regression
CN115482478B (en) Road identification method, device, unmanned aerial vehicle, equipment and storage medium
CN115861997B (en) License plate detection and recognition method for key foreground feature guided knowledge distillation
CN115482477B (en) Road identification method, device, unmanned aerial vehicle, equipment and storage medium
Qin et al. Vehicle route tracking system by cooperative license plate recognition on multi-peer monitor videos
Toha et al. DhakaNet: unstructured vehicle detection using limited computational resources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
Effective date of abandoning: 20221101