CN109960959A - Method and apparatus for handling image - Google Patents


Info

Publication number
CN109960959A
CN109960959A (application CN201711337034.XA; granted as CN109960959B)
Authority
CN
China
Prior art keywords
lane line
line
location information
target image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711337034.XA
Other languages
Chinese (zh)
Other versions
CN109960959B (en)
Inventor
李旭斌
傅依
文石磊
刘霄
丁二锐
孙昊
郭鹏
蒋子谦
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201711337034.XA
Publication of CN109960959A
Application granted
Publication of CN109960959B
Legal status: Active


Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06V — Image or Video Recognition or Understanding
        • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
        • G06V 10/56 — Extraction of image or video features relating to colour
        • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose a method and apparatus for processing images. One specific embodiment of the method includes: determining, based on the color of each of a plurality of pixels of a target image and on the position information of the plurality of pixels in the target image, the position information of the pixels on which at least one lane line lies; performing a linear fit on the position information of the pixels on which the at least one lane line lies, to determine, for each lane line of the at least one lane line, the position information of at least two points in the target image on the line segment on which that lane line lies; and, based on the position information of the at least two points on the line segment on which each lane line lies, determining the region of the target image in which each lane line lies and the line style of the lane line in each region. Embodiments of the present application improve the accuracy of lane line recognition.

Description

Method and apparatus for processing images
Technical field
Embodiments of the present application relate to the field of computer technology, in particular to the field of Internet technology, and more particularly to a method and apparatus for processing images.
Background art
Lane lines are the guide lines most commonly encountered while a vehicle travels on a road, and lane lines are painted on nearly all roads. Information about lane lines can guide the driver and thereby better ensure the driving safety of the vehicle.
Summary of the invention
Embodiments of the present application propose a method and apparatus for processing images.
In a first aspect, embodiments of the present application provide a method for processing images, comprising: determining, based on the color of each of a plurality of pixels of a target image and on the position information of the plurality of pixels in the target image, the position information of the pixels on which at least one lane line lies; performing a linear fit on the position information of the pixels on which the at least one lane line lies, to determine the position information of at least two points in the target image on the line segment on which each lane line of the at least one lane line lies; and determining, based on the position information of the at least two points on the line segment on which each lane line lies, the region of the target image in which each lane line lies, and determining the line style of the lane line in each region.
In some embodiments, performing the linear fit on the position information of the pixels on which the at least one lane line lies to determine the position information of at least two points on the line segment on which each lane line lies includes: performing a linear fit on the position information of the pixels on which the at least one lane line lies, to obtain the function of the line segment on which each lane line of the at least one lane line lies; and, for each lane line, determining at least two pieces of position information in the target image that satisfy the function of the line segment on which that lane line lies.
In some embodiments, determining, based on the color of each of the plurality of pixels of the target image and on the position information of the pixels in the target image, the position information of the pixels on which at least one lane line lies includes: inputting the plurality of pixels of the target image and the position information of the plurality of pixels in the target image into a pre-trained color classification model, where the color classification model performs color classification on pixels and outputs the pixels and position information of the color classes belonging to a lane line; and obtaining, from the output of the color classification model, the pixels of each of at least one color class belonging to a lane line and the position information of those pixels in the target image.
In some embodiments, the position information of the at least two points consists of the position information of the two endpoints of the line segment on which a lane line lies and the position information of the midpoints of the two short sides of the region, the region in which each lane line lies being a rectangle; and determining, based on the position information of the at least two points on the line segment on which each lane line lies, the region of the target image in which each lane line lies includes: for each lane line, determining in the target image, based on the position information of the midpoints of the two short sides of the region containing the line segment on which the lane line lies, a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
In some embodiments, determining the line style of the lane line in each region includes: inputting the region in which each lane line lies into a pre-trained line style classification model, and obtaining the line style, output by the line style classification model, of the lane line contained in each region, where the line style classification model classifies the line styles of lane lines.
In some embodiments, the method further includes: performing straight-line detection and/or frequency-domain detection on a previously obtained image, determining the sub-image in which the lane lines of the obtained image lie, and taking the sub-image as the target image.
In a second aspect, embodiments of the present application provide an apparatus for processing images, comprising: a determination unit configured to determine, based on the color of each of a plurality of pixels of a target image and on the position information of the plurality of pixels in the target image, the position information of the pixels on which at least one lane line lies; a fitting unit configured to perform a linear fit on the position information of the pixels on which the at least one lane line lies, to determine the position information of at least two points in the target image on the line segment on which each lane line of the at least one lane line lies; and a line style determination unit configured to determine, based on the position information of the at least two points on the line segment on which each lane line lies, the region of the target image in which each lane line lies, and to determine the line style of the lane line in each region.
In some embodiments, the fitting unit comprises: a fitting module configured to perform a linear fit on the position information of the pixels on which the at least one lane line lies, to obtain the function of the line segment on which each lane line of the at least one lane line lies; and a determining module configured, for each lane line, to determine at least two pieces of position information in the target image that satisfy the function of the line segment on which that lane line lies.
In some embodiments, the determination unit comprises: an input module configured to input the plurality of pixels of the target image and the position information of the plurality of pixels in the target image into a pre-trained color classification model, where the color classification model performs color classification on pixels and outputs the pixels and position information of the color classes belonging to a lane line; and an output module configured to obtain, from the output of the color classification model, the pixels of each of at least one color class belonging to a lane line and the position information of those pixels in the target image.
In some embodiments, the position information of the at least two points consists of the position information of the two endpoints of the line segment on which a lane line lies and the position information of the midpoints of the two short sides of the region, the region in which each lane line lies being a rectangle; and the line style determination unit is further configured, for each lane line, to determine in the target image, based on the position information of the midpoints of the two short sides of the region containing the line segment on which the lane line lies, a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
In some embodiments, the line style determination unit is further configured to input the region in which each lane line lies into a pre-trained line style classification model, and to obtain the line style, output by the line style classification model, of the lane line contained in each region, where the line style classification model classifies the line styles of lane lines.
In some embodiments, the apparatus further comprises: a pre-processing unit configured to perform straight-line detection and/or frequency-domain detection on a previously obtained image, determine the sub-image in which the lane lines of the obtained image lie, and take the sub-image as the target image.
In a third aspect, embodiments of the present application provide an electronic device, comprising: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the method for processing images.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the method for processing images.
In the method and apparatus for processing images provided by embodiments of the present application, first, based on the color of each of a plurality of pixels of a target image and on the position information of the pixels in the target image, the position information of the pixels on which at least one lane line lies is determined. Then a linear fit is performed on the position information of the pixels on which the at least one lane line lies, to determine the position information of at least two points in the target image on the line segment on which each lane line of the at least one lane line lies. Finally, based on the position information of the at least two points on the line segment on which each lane line lies, the region of the target image in which each lane line lies is determined, as is the line style of the lane line in each region. Embodiments of the present application improve the accuracy of lane line recognition.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a diagram of an exemplary system architecture in which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for processing images according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for processing images according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for processing images according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for processing images according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing an electronic device of embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 of an embodiment in which the method for processing images or the apparatus for processing images of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, to receive or send messages and the like. Various telecommunication client applications may be installed on the terminal devices 101, 102, 103, such as car navigation applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support communication connections, including but not limited to vehicles, vehicle navigation systems, smartphones, tablet computers, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for the lane-line line styles displayed on the terminal devices 101, 102, 103. The background server may analyze and otherwise process received data such as a target image, and feed the processing result (for example, the line styles of the lane lines in the target image) back to the terminal devices.
It should be noted that the method for processing images provided by embodiments of the present application is generally executed by the server 105; accordingly, the apparatus for processing images is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for processing images according to the present application is shown. The method for processing images comprises the following steps:
Step 201: based on the color of each of a plurality of pixels of a target image and on the position information of the plurality of pixels in the target image, determine the position information of the pixels on which at least one lane line lies.
In this embodiment, an electronic device (such as the server shown in Fig. 1) on which the method for processing images runs determines, based on the color of each of a plurality of pixels of the target image and on the position information of the pixels, the position information of the pixels on which at least one lane line lies. The target image may be an image obtained locally or from another electronic device. Specifically, the target image may be a top view or front view captured of a road surface. To obtain better image analysis results, the target image used here may be a high-resolution image. The position information of a pixel here is its position information in the target image: the target image contains a plurality of pixels, and each pixel has its position information in the target image.
Specifically, position information may be expressed in the form of coordinates. A lane line is a line drawn on a road on which vehicles travel, indicating the driving position of vehicles; there may be one or more lane lines in an image. In practice, the pixels on which the lane lines lie may be determined from among the plurality of pixels of the target image, and the position information of those pixels may then be determined. Because the color of lane lines differs from other colors (such as the road-surface background color), the pixels on which the lane lines lie can be found by performing color recognition on each pixel. Given preset lane-line colors, the pixels belonging to a lane-line color can be determined; those pixels are the pixels on which the lane lines lie, and their position information can be taken as the position information of the lane-line pixels.
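The color test described above can be sketched as a simple threshold pass over the pixels. This is an illustrative sketch only, not the patent's implementation: the RGB thresholds for "white" and "yellow" paint, the function name `lane_pixels`, and the input format are all assumptions.

```python
def lane_pixels(pixels):
    """Select pixels whose color matches a preset lane-line color.

    `pixels` is a list of ((x, y), (r, g, b)) tuples; returns the
    positions of pixels classified as white or yellow lane paint.
    The thresholds are illustrative assumptions, not patent values.
    """
    positions = []
    for (x, y), (r, g, b) in pixels:
        is_white = r > 200 and g > 200 and b > 200
        is_yellow = r > 180 and g > 150 and b < 120
        if is_white or is_yellow:
            positions.append((x, y))
    return positions

# A toy 4-pixel image: gray road surface, one white and one yellow mark.
sample = [
    ((0, 0), (90, 90, 90)),     # road background
    ((1, 0), (250, 250, 250)),  # white lane paint
    ((2, 0), (210, 180, 60)),   # yellow lane paint
    ((3, 0), (80, 85, 88)),     # road background
]
print(lane_pixels(sample))  # -> [(1, 0), (2, 0)]
```

The remaining positions are exactly the per-pixel location information that the later fitting step consumes.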
Step 202: perform a linear fit on the position information of the pixels on which the at least one lane line lies, to determine the position information of at least two points in the target image on the line segment on which each lane line of the at least one lane line lies.
In this embodiment, the electronic device performs a linear fit on the position information of the pixels on which the at least one lane line lies, to determine the position information of at least two points on the line segment on which each lane line of the at least one lane line lies. The position information here is position information in the target image; that is, the points indicated by the two pieces of position information fall within the target image. Through the linear fit, at least one line segment on which a lane line lies is obtained, along with the number of such line segments; the number of line segments may be taken as the number of lane lines.
In practice, a fitted line segment may be represented by the position information of at least two points falling on it, such as the coordinates (x1, y1) and (x2, y2), or by the function of the line segment.
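Assuming the pixels of a single lane line have already been grouped, the fit can be sketched with an ordinary least-squares line via `numpy.polyfit`; the two representative points are then read off at the extreme x-coordinates. The function name and the choice of fitting y as a function of x are assumptions for illustration (for near-vertical lane lines one would fit x as a function of y instead).

```python
import numpy as np

def fit_segment(points):
    """Fit y = k*x + b to lane-line pixel positions and return the
    positions of the two segment endpoints (at min and max x)."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    k, b = np.polyfit(xs, ys, 1)   # degree-1 least-squares fit
    x1, x2 = xs.min(), xs.max()
    return (x1, k * x1 + b), (x2, k * x2 + b)

# Pixels lying (noisily) on the line y = 2x + 1.
pts = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]
p1, p2 = fit_segment(pts)
print(p1, p2)  # endpoints near (0, 1) and (3, 7)
```

The pair `(p1, p2)` is exactly the "(x1, y1) and (x2, y2)" representation mentioned above; the `(k, b)` pair is the function representation.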
Step 203: based on the position information of the at least two points on the line segment on which each lane line lies, determine the region of the target image in which each lane line lies, and determine the line style of the lane line in each region.
In this embodiment, the electronic device determines, based on the position information of the at least two points on the line segment on which each lane line lies, the region of the target image in which each lane line lies, and then determines the line style of the lane line in each determined region. The position information of the at least two points here serves to locate the lane line: through the position information, the lane line can be located in the target image, facilitating the subsequent determination of its line style. The line style of a lane line refers to the pattern the lane line presents, for example a solid line or a dashed line.
Specifically, the region of the target image in which each lane line lies may be determined in various ways. For example, the line through the positions of the at least two points may be used as the center line of the region, and a region of a specified width taken on each side of the center line. Alternatively, two lines parallel to the line segment through the positions of the at least two points may be constructed, each at a specified distance from that segment, and the region between the two parallel lines taken as the region in which the lane line lies.
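The center-line construction above can be sketched geometrically: offset the fitted segment by half the region width along its unit normal on each side, giving the four corners of the region. The function name and corner ordering are illustrative assumptions.

```python
import math

def segment_region(p1, p2, width):
    """Return the four corners of a rectangle of the given width whose
    center line is the segment p1-p2 (the parallel-lines construction)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    # Unit normal to the segment; each parallel side is offset by width/2.
    nx, ny = -dy / length, dx / length
    h = width / 2.0
    return [(x1 + nx * h, y1 + ny * h), (x2 + nx * h, y2 + ny * h),
            (x2 - nx * h, y2 - ny * h), (x1 - nx * h, y1 - ny * h)]

# A vertical segment of length 4 with region width 2.
corners = segment_region((5, 0), (5, 4), 2.0)
print(corners)  # -> [(4.0, 0.0), (4.0, 4.0), (6.0, 4.0), (6.0, 0.0)]
```

The two sides `corners[0]-corners[1]` and `corners[3]-corners[2]` are the two parallel lines the text describes, each at distance `width / 2` from the fitted segment.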
After the region in which each lane line lies has been determined, the line style of the lane line in each region can be determined. The lane line in each region may be compared with standard line-style drawings of lane lines, and the line style in the standard drawing that matches the lane line in the region taken as the line style of that region's lane line; the two may be deemed to match when their similarity exceeds a threshold. Alternatively, a classification model may be used to categorize the line styles of lane lines and thereby determine the line style of the lane line in each region.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing images according to this embodiment. In the application scenario of Fig. 3, an electronic device 301, based on the color of each of a plurality of pixels of an image 302 and on the position information of the pixels in the image 302, determines the position information 303 of the pixels on which at least one lane line lies. A linear fit is performed on the position information of the pixels on which the at least one lane line lies, to determine the position information 304 of two points in the target image on the line segment on which each of three lane lines lies. Based on the position information 304 of the two points on the line segment on which each lane line lies, the region 305 of the target image in which each lane line lies is determined, as is the line style 306 of the lane line in each region.
The method provided by the above embodiment of the present application improves the accuracy of lane line recognition and can determine the number and line styles of lane lines.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing images is shown. The flow 400 of the method for processing images comprises the following steps:
Step 401: perform straight-line detection and/or frequency-domain detection on a previously obtained image, determine the sub-image in which the lane lines of the obtained image lie, and take the sub-image as the target image.
In this embodiment, the electronic device performs straight-line detection and/or frequency-domain detection on a previously obtained image to determine the sub-image in which the lane lines of the obtained image lie, and takes the sub-image as the target image. The sub-image is a part of the previously obtained image, which is usually a front view or a top view. Straight-line detection may be performed on the front view or top view to detect straight lines in the image; the positions of the straight lines are then determined and the image is segmented at those positions. The detection may be performed, for example, using the Hough transform. Frequency-domain detection may also be used, i.e., detecting the high-frequency and low-frequency signals in the image: the partial image corresponding to a low-frequency signal contains no lane lines (for example, the sky in a front view), so that partial image is excluded from the generated sub-image. Both detections may also be performed on the same image, to obtain a better detection effect and determine the sub-image more accurately.
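As a rough stand-in for the frequency-domain test, one can score each image row by its mean horizontal gradient and keep only rows with enough high-frequency content, dropping flat low-frequency rows such as sky. The row-scoring rule and the threshold value are assumptions for illustration, not the patent's method.

```python
def crop_low_frequency_rows(image, threshold):
    """Return the (start, end) row range containing high-frequency
    content, a stand-in for the frequency-domain test in step 401.

    `image` is a list of rows of grayscale values.  A row whose mean
    absolute horizontal gradient falls below `threshold` is treated as
    low-frequency (e.g. sky) and excluded from the sub-image.
    """
    keep = []
    for i, row in enumerate(image):
        grad = sum(abs(b - a) for a, b in zip(row, row[1:])) / (len(row) - 1)
        if grad >= threshold:
            keep.append(i)
    return (keep[0], keep[-1] + 1) if keep else (0, 0)

img = [
    [200, 200, 200, 200],  # flat "sky" row: zero gradient
    [200, 201, 200, 200],  # nearly flat
    [50, 250, 60, 240],    # textured road row with lane paint
    [40, 245, 55, 235],    # textured road row
]
print(crop_low_frequency_rows(img, threshold=10))  # -> (2, 4)
```

The returned row range defines the sub-image passed on as the target image in the following steps.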
Step 402: input the plurality of pixels of the target image and the position information of the plurality of pixels in the target image into a pre-trained color classification model.
In this embodiment, the electronic device inputs the plurality of pixels of the target image and their position information in the target image into a pre-trained color classification model. The color classification model performs color classification on pixels and outputs the pixels and position information of each color class. Specifically, the color classification model may be a two-class model — for example, the colors of pixels may be divided into road-surface background color and non-background color — or a three-class (or larger) model; for example, the colors of pixels may be divided into road-surface background color, white, and yellow, where white and yellow are the colors of lane lines.
Specifically, the color classification model may be trained with a large number of samples of various colors labeled with color classes. The color classification model may be obtained by training a classifier such as a support vector machine (Support Vector Machine, SVM) or a naive Bayesian model (Naive Bayesian Model, NBM); alternatively, the color classification model may be pre-trained based on a classification function such as the softmax function.
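As a lightweight stand-in for the SVM or naive-Bayes classifier mentioned above, a nearest-centroid color classifier illustrates the same train-then-classify interface. The class names and training colors are assumed for the example; a production model would be trained on many labeled samples as the text describes.

```python
def make_color_classifier(labeled_samples):
    """Train a nearest-centroid color classifier, a simple stand-in
    for the SVM / naive-Bayes models named in the text.

    `labeled_samples` maps a class name (e.g. "road", "white",
    "yellow") to a list of (r, g, b) training colors.
    """
    centroids = {
        label: tuple(sum(c[i] for c in colors) / len(colors) for i in range(3))
        for label, colors in labeled_samples.items()
    }

    def classify(rgb):
        # Pick the class whose mean color is closest in RGB space.
        return min(centroids, key=lambda lab: sum(
            (rgb[i] - centroids[lab][i]) ** 2 for i in range(3)))

    return classify

train = {
    "road":   [(80, 80, 80), (100, 100, 100)],
    "white":  [(240, 240, 240), (255, 255, 255)],
    "yellow": [(220, 190, 60), (200, 170, 40)],
}
classify = make_color_classifier(train)
print(classify((250, 248, 245)))  # -> white
print(classify((95, 92, 90)))     # -> road
```

Running `classify` over every (pixel, position) pair and keeping only the "white" and "yellow" outputs yields the labeled lane-line pixels described in step 403.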
Step 403: obtain, from the output of the color classification model, the pixels of each of at least one color class belonging to a lane line and the position information of those pixels in the target image.
In this embodiment, the electronic device obtains, from the output of the color classification model, the pixels of each of the at least one color class belonging to a lane line and the position information of those pixels in the target image. The color classification model outputs the pixels of each color class along with their position information, and the output classes include the color classes of the lane lines. The pixels and position information of different color classes in the output may be distinguished with different labels, to facilitate determining the pixels and position information of the color classes belonging to a lane line.
Step 404: perform a linear fit on the position information of the pixels on which the at least one lane line lies, to obtain the function of the line segment on which each lane line of the at least one lane line lies.
In this embodiment, the electronic device performs a linear fit on the position information of the pixels on which the at least one lane line lies. The fit yields, for each lane line of the at least one lane line, the function of the line segment on which that lane line lies. The function obtained here is a linear function, representing the line segment on which each lane line lies. The variable in the function may have a value range, so that the coordinate points of the function fall on a line segment and within the target image.
Step 405: for each lane line, determine at least two pieces of position information in the target image that satisfy the function of the line segment on which that lane line lies.
In this embodiment, for each lane line, the electronic device determines at least two pieces of position information that satisfy the function of the line segment on which that lane line lies; the at least two positions here are also points in the target image.
Step 406: for each lane line, determine in the target image, based on the position information of the midpoints of the two short sides of the region containing the line segment on which the lane line lies, a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
In this embodiment, the position information of the at least two points consists of the position information of the two endpoints of the line segment on which a lane line lies and the position information of the midpoints of the two short sides of the region, the region in which each lane line lies being a rectangle. For each lane line, the electronic device determines a rectangular region in the target image based on the position information of the midpoints of the two short sides of the region; the region here — the region to be determined — is the region containing the line segment on which that lane line lies. The width of the rectangular region is a predetermined width, and its length is the length of the line connecting the two endpoints of the line segment on which the lane line lies. Once the position information of the midpoints of the two short sides of the rectangular region is obtained, the rectangular region can be located; its length and width then determine the size of the rectangle. Every rectangle has a length and a width, and the two short sides here refer to the pair of opposite sides at the rectangle's width.
Step 407: input the region in which each lane line lies into a pre-trained line style classification model, and obtain the line style, output by the line style classification model, of the lane line contained in each region.
In this embodiment, the electronic device inputs the region in which each lane line lies into a pre-trained line style classification model and obtains the line style, output by the model, of the lane line contained in each region. The line style classification model classifies the line styles of lane lines; through the model, the electronic device can distinguish different lane-line styles and thereby determine the line style of the lane line in each region.
Specifically, the line style classification model may be trained with a large number of samples of various lane-line styles labeled with line-style classes. The line style classification model may be obtained by training a classifier such as a support vector machine (Support Vector Machine, SVM) or a naive Bayesian model (Naive Bayesian Model, NBM); alternatively, the line style classification model may be pre-trained based on a classification function such as the softmax function.
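A trained classifier is the approach described above; as a minimal illustrative stand-in, a solid-versus-dashed decision can be sketched from the gap ratio of paint samples taken along the fitted segment. The 25% gap threshold and the binary paint profile are assumptions, not values from the patent.

```python
def classify_line_style(profile, gap_threshold=0.25):
    """Classify the lane line in a region as solid or dashed.

    `profile` is the sequence of samples taken along the fitted
    segment (1 = paint present, 0 = gap).  This gap-ratio rule is a
    stand-in for the trained line-style classification model: if more
    than `gap_threshold` of the samples are gaps, call the line dashed.
    """
    gaps = profile.count(0) / len(profile)
    return "dashed" if gaps > gap_threshold else "solid"

solid_profile = [1] * 18 + [0] * 2           # 10% gaps
dashed_profile = ([1] * 3 + [0] * 3) * 4     # 50% gaps
print(classify_line_style(solid_profile))    # -> solid
print(classify_line_style(dashed_profile))   # -> dashed
```

A real model would additionally distinguish finer styles (double lines, colored lines, etc.), which is why the text relies on a trained classifier rather than a single threshold.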
This embodiment uses a color classification model to determine the color of the lane lines, improving the accuracy of lane line recognition. At the same time, by obtaining from the image the sub-image in which the lane lines lie, this embodiment reduces the time consumed by lane line recognition while further improving its accuracy.
With further reference to Fig. 5, as an implementation of the methods shown in the figures above, the present application provides an embodiment of a device for processing images. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may specifically be applied in various electronic equipment.
As shown in Fig. 5, the device 500 for processing images of the present embodiment comprises: a determination unit 501, a fitting unit 502 and a line style determination unit 503. The determination unit 501 is configured to determine, based on the color of each pixel among multiple pixels of a target image and from the location information of the multiple pixels in the target image, the location information of the multiple pixels where at least one lane line lies. The fitting unit 502 is configured to perform linear fitting on the location information of the multiple pixels where the at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the at least one lane line lies in the target image. The line style determination unit 503 is configured to determine, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the target image, and to determine the line style of the lane line in each region.
In the present embodiment, the determination unit 501 of the device 500 for processing images may determine, based on the color of each pixel among the multiple pixels of the target image and from the location information of the multiple pixels, the location information of the multiple pixels where the at least one lane line lies. The target image may be an image obtained locally or from other electronic equipment. Specifically, the target image may be a top view or a front view taken of the road surface. Here, in order to obtain better image analysis results, the target image used may be a high-resolution image. The location information of a pixel here is its location information in the target image: the target image contains multiple pixels, and each pixel has its own location information in the target image.
In the present embodiment, the fitting unit 502 performs linear fitting on the location information of the multiple pixels where the above-mentioned at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the above-mentioned at least one lane line lies. The location information here is location information in the target image; that is, the positions indicated by these two pieces of location information fall within the target image. Through linear fitting, at least one line segment where a lane line lies can be obtained, and with it the number of line segments among the at least one line segment. The number of line segments may be determined as the number of lane lines.
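A possible sketch of this fitting step, assuming the pixels belonging to one lane line have already been grouped (the grouping itself, and the function name, are outside the patent text):

```python
import numpy as np

def fit_lane_segment(points):
    """Fit a line to the pixel locations of one lane line and return the
    location information of two positions on the fitted segment.

    points -- sequence of (x, y) pixel locations belonging to one lane line.
    """
    pts = np.asarray(points, dtype=float)
    # Least-squares linear fit y = k*x + b over the lane-line pixels.
    k, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    x_min, x_max = pts[:, 0].min(), pts[:, 0].max()
    # Two positions on the fitted line segment: its endpoints.
    return (x_min, k * x_min + b), (x_max, k * x_max + b)
```

Running this once per pixel group yields one fitted segment per lane line, so the number of fitted segments equals the number of lane lines, as described above.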
In the present embodiment, the line style determination unit 503 determines, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the above-mentioned target image, and then determines the line style of the lane line within each determined region. The location information of the at least two positions here serves to locate the lane line: by means of the location information, the lane line can be located in the target image, so as to facilitate the subsequent determination of its line style. The line style of a lane line refers to the pattern presented by the lane line, for example, a solid line or a dashed line.
In some optional implementations of the present embodiment, the fitting unit comprises: a fitting module, configured to perform linear fitting on the location information of the multiple pixels where the at least one lane line lies, to obtain the function of the line segment where each lane line in the at least one lane line lies; and a determining module, configured to determine, for each lane line, at least two pieces of location information in the target image that satisfy the function of the line segment where this lane line lies.
In some optional implementations of the present embodiment, the determination unit comprises: an input module, configured to input the multiple pixels of the target image and the location information of the multiple pixels in the target image into a color classification model trained in advance, wherein the color classification model is used to perform color classification on the pixels containing colors and to output the pixels belonging to the color categories of lane lines together with their location information; and an output module, configured to obtain, as output by the color classification model, the multiple pixels of each color category in at least one color category belonging to lane lines and the location information of the multiple pixels of that color category in the target image.
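An illustrative stand-in for such a color classification step. A simple threshold rule replaces the trained model here, purely to show the input/output shape described above (per-category pixels plus their locations); the category names and thresholds are assumptions:

```python
import numpy as np

def classify_lane_pixels(image):
    """Return, per color category, the pixel values and their (row, col)
    locations in the image. A crude threshold rule stands in for the
    trained color classification model.

    image -- H x W x 3 array of RGB values in [0, 255].
    """
    img = np.asarray(image, dtype=np.int64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    categories = {
        # Bright pixels with all channels high: candidate white markings.
        "white": (r > 200) & (g > 200) & (b > 200),
        # Red and green high, blue low: candidate yellow markings.
        "yellow": (r > 180) & (g > 150) & (b < 120),
    }
    result = {}
    for name, mask in categories.items():
        rows, cols = np.nonzero(mask)
        result[name] = {
            "pixels": img[mask],
            "locations": list(zip(rows.tolist(), cols.tolist())),
        }
    return result
```

A trained model would replace the hand-written masks, but its output would carry the same two pieces of information per category: the pixels and their locations in the target image.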
In some optional implementations of the present embodiment, the location information of the at least two positions is the location information of the two endpoints of the line segment where the lane line lies and the location information of the midpoints of the two wide sides of the region, the region where each lane line lies being a rectangle; and the line style determination unit is further configured to: for each lane line, based on the location information of the midpoints of the two wide sides of the region where the line segment of this lane line lies, determine in the target image a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
In some optional implementations of the present embodiment, the line style determination unit is further configured to: input the region where each lane line lies into a line style classification model trained in advance, to obtain the line style of the lane line contained in each region as output by the line style classification model, wherein the line style classification model is used to classify the line styles of lane lines.
In some optional implementations of the present embodiment, the device further comprises: a pre-processing unit, configured to perform straight-line detection and/or frequency domain detection on a previously obtained image, determine the sub-image where the lane lines of the obtained image lie, and determine the sub-image as the target image.
Referring now to Fig. 6, it shows a structural schematic diagram of a computer system 600 of electronic equipment suitable for implementing the embodiments of the present application. The electronic equipment shown in Fig. 6 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602 and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, etc.; an output section 607 including, for example, a liquid crystal display (LCD) and a loudspeaker; a storage section 608 including a hard disk, etc.; and a communications section 609 including a network interface card such as a LAN card or a modem. The communications section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A detachable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, in accordance with embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications section 609, and/or installed from the detachable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic memory device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which program may be used by or in connection with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, electric wire, optical cable, RF, etc., or any appropriate combination of the above.
The flow charts and block diagrams in the figures illustrate the possible architecture, functions and operations of the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for realizing the specified logic functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the figures. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be realized by means of software or by means of hardware. The described units may also be arranged in a processor; for example, a processor may be described as including a determination unit, a fitting unit and a line style determination unit. The names of these units do not, under certain circumstances, constitute a restriction on the units themselves; for example, the determination unit may also be described as "a unit for determining the location information of multiple pixels where at least one lane line lies".
As another aspect, the present application also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs; when the one or more programs are executed by the device, the device is caused to: determine, based on the color of each pixel among multiple pixels of a target image and from the location information of the multiple pixels in the target image, the location information of multiple pixels where at least one lane line lies; perform linear fitting on the location information of the multiple pixels where the at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the at least one lane line lies in the target image; and determine, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the target image, and determine the line style of the lane line in each region.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (14)

1. A method for processing images, comprising:
determining, based on the color of each pixel among multiple pixels of a target image and from the location information of the multiple pixels in the target image, the location information of multiple pixels where at least one lane line lies;
performing linear fitting on the location information of the multiple pixels where the at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the at least one lane line lies in the target image;
determining, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the target image, and determining the line style of the lane line in each region.
2. The method for processing images according to claim 1, wherein the performing linear fitting on the location information of the multiple pixels where the at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the at least one lane line lies in the target image, comprises:
performing linear fitting on the location information of the multiple pixels where the at least one lane line lies, to obtain the function of the line segment where each lane line in the at least one lane line lies;
for each lane line, determining at least two pieces of location information in the target image that satisfy the function of the line segment where this lane line lies.
3. The method for processing images according to claim 1, wherein the determining, based on the color of each pixel among the multiple pixels of the target image and from the location information of the multiple pixels in the target image, the location information of the multiple pixels where the at least one lane line lies, comprises:
inputting the multiple pixels of the target image and the location information of the multiple pixels in the target image into a color classification model trained in advance, wherein the color classification model is used to perform color classification on the pixels containing colors and to output the pixels belonging to the color categories of lane lines together with their location information;
obtaining, as output by the color classification model, the multiple pixels of each color category in at least one color category belonging to lane lines and the location information of the multiple pixels of that color category in the target image.
4. The method for processing images according to claim 1, wherein the location information of the at least two positions is the location information of the two endpoints of the line segment where the lane line lies and the location information of the midpoints of the two wide sides of the region, the region where each lane line lies being a rectangle; and
the determining, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the target image comprises:
for each lane line, based on the location information of the midpoints of the two wide sides of the region where the line segment of this lane line lies, determining in the target image a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
5. The method for processing images according to claim 1, wherein the determining the line style of the lane line in each region comprises:
inputting the region where each lane line lies into a line style classification model trained in advance, to obtain the line style of the lane line contained in each region as output by the line style classification model, wherein the line style classification model is used to classify the line styles of lane lines.
6. The method for processing images according to claim 1, wherein the method further comprises:
performing straight-line detection and/or frequency domain detection on a previously obtained image, determining the sub-image where the lane lines of the obtained image lie, and determining the sub-image as the target image.
7. A device for processing images, comprising:
a determination unit, configured to determine, based on the color of each pixel among multiple pixels of a target image and from the location information of the multiple pixels in the target image, the location information of multiple pixels where at least one lane line lies;
a fitting unit, configured to perform linear fitting on the location information of the multiple pixels where the at least one lane line lies, to determine the location information of at least two positions on the line segment where each lane line in the at least one lane line lies in the target image;
a line style determination unit, configured to determine, based on the location information of the at least two positions on the line segment where each lane line lies, the region where each lane line lies in the target image, and to determine the line style of the lane line in each region.
8. The device for processing images according to claim 7, wherein the fitting unit comprises:
a fitting module, configured to perform linear fitting on the location information of the multiple pixels where the at least one lane line lies, to obtain the function of the line segment where each lane line in the at least one lane line lies;
a determining module, configured to determine, for each lane line, at least two pieces of location information in the target image that satisfy the function of the line segment where this lane line lies.
9. The device for processing images according to claim 7, wherein the determination unit comprises:
an input module, configured to input the multiple pixels of the target image and the location information of the multiple pixels in the target image into a color classification model trained in advance, wherein the color classification model is used to perform color classification on the pixels containing colors and to output the pixels belonging to the color categories of lane lines together with their location information;
an output module, configured to obtain, as output by the color classification model, the multiple pixels of each color category in at least one color category belonging to lane lines and the location information of the multiple pixels of that color category in the target image.
10. The device for processing images according to claim 7, wherein the location information of the at least two positions is the location information of the two endpoints of the line segment where the lane line lies and the location information of the midpoints of the two wide sides of the region, the region where each lane line lies being a rectangle; and
the line style determination unit is further configured to:
for each lane line, based on the location information of the midpoints of the two wide sides of the region where the line segment of this lane line lies, determine in the target image a rectangular region whose width is a predetermined width and whose length is the length of the line connecting the two endpoints.
11. The device for processing images according to claim 7, wherein the line style determination unit is further configured to:
input the region where each lane line lies into a line style classification model trained in advance, to obtain the line style of the lane line contained in each region as output by the line style classification model, wherein the line style classification model is used to classify the line styles of lane lines.
12. The device for processing images according to claim 7, wherein the device further comprises:
a pre-processing unit, configured to perform straight-line detection and/or frequency domain detection on a previously obtained image, determine the sub-image where the lane lines of the obtained image lie, and determine the sub-image as the target image.
13. An electronic device, comprising:
one or more processors;
a storage device, for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to realize the method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, realizes the method according to any one of claims 1 to 6.
CN201711337034.XA 2017-12-14 2017-12-14 Method and apparatus for processing image Active CN109960959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711337034.XA CN109960959B (en) 2017-12-14 2017-12-14 Method and apparatus for processing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711337034.XA CN109960959B (en) 2017-12-14 2017-12-14 Method and apparatus for processing image

Publications (2)

Publication Number Publication Date
CN109960959A true CN109960959A (en) 2019-07-02
CN109960959B CN109960959B (en) 2020-04-03

Family

ID=67017831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711337034.XA Active CN109960959B (en) 2017-12-14 2017-12-14 Method and apparatus for processing image

Country Status (1)

Country Link
CN (1) CN109960959B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688971A (en) * 2019-09-30 2020-01-14 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN112507852A (en) * 2020-12-02 2021-03-16 上海眼控科技股份有限公司 Lane line identification method, device, equipment and storage medium
CN113066153A (en) * 2021-04-28 2021-07-02 浙江中控技术股份有限公司 Method, device and equipment for generating pipeline flow chart and storage medium
CN113688721A (en) * 2021-08-20 2021-11-23 北京京东乾石科技有限公司 Method and device for fitting lane line

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212287B1 (en) * 1996-10-17 2001-04-03 Sgs-Thomson Microelectronics S.R.L. Method for identifying marking stripes of road lanes
CN102298693A (en) * 2011-05-18 2011-12-28 浙江大学 Expressway bend detection method based on computer vision
CN103295420A (en) * 2013-01-30 2013-09-11 吉林大学 Method for recognizing lane line
CN104318258A (en) * 2014-09-29 2015-01-28 南京邮电大学 Time domain fuzzy and kalman filter-based lane detection method
CN106203273A (en) * 2016-06-27 2016-12-07 开易(北京)科技有限公司 The lane detection system of multiple features fusion, method and senior drive assist system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ju Qian'ao, "Research on Fast Lane Line Recognition Based on Machine Vision", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688971A (en) * 2019-09-30 2020-01-14 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
WO2021063228A1 (en) * 2019-09-30 2021-04-08 上海商汤临港智能科技有限公司 Dashed lane line detection method and device, and electronic apparatus
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN112507852A (en) * 2020-12-02 2021-03-16 上海眼控科技股份有限公司 Lane line identification method, device, equipment and storage medium
CN113066153A (en) * 2021-04-28 2021-07-02 浙江中控技术股份有限公司 Method, device and equipment for generating pipeline flow chart and storage medium
CN113688721A (en) * 2021-08-20 2021-11-23 北京京东乾石科技有限公司 Method and device for fitting lane line
CN113688721B (en) * 2021-08-20 2024-03-05 北京京东乾石科技有限公司 Method and device for fitting lane lines

Also Published As

Publication number Publication date
CN109960959B (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN108073910B (en) Method and device for generating human face features
CN109960959A (en) Method and apparatus for handling image
CN108090916B (en) Method and apparatus for tracking the targeted graphical in video
CN109063653A (en) Image processing method and device
CN108898185A (en) Method and apparatus for generating image recognition model
CN104239465B (en) A kind of method and device scanned for based on scene information
CN109344762A (en) Image processing method and device
CN109034069A (en) Method and apparatus for generating information
CN108510472A (en) Method and apparatus for handling image
CN109308681A (en) Image processing method and device
CN108345387A (en) Method and apparatus for output information
CN110390237A (en) Processing Method of Point-clouds and system
CN110533055A (en) A kind for the treatment of method and apparatus of point cloud data
CN108364209A (en) Methods of exhibiting, device, medium and the electronic equipment of merchandise news
CN108734185A (en) Image verification method and apparatus
CN107742128A (en) Method and apparatus for output information
CN108491825A (en) information generating method and device
CN110349158A (en) A kind of method and apparatus handling point cloud data
CN109242801A (en) Image processing method and device
CN108595448A (en) Information-pushing method and device
CN108960110A (en) Method and apparatus for generating information
CN108182457A (en) For generating the method and apparatus of information
CN108882025A (en) Video frame treating method and apparatus
CN109214501A (en) The method and apparatus of information for identification
CN109118456A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant