Disclosure of Invention
In view of the above problems, the present invention provides a lane line detection method, and the specific implementation manner is as follows.
The embodiment of the invention provides a lane line detection method, which comprises the following steps:
step 1: acquiring an image in front of a vehicle, and converting the image into a gray scale image;
step 2: carrying out coarse positioning on the lane line by utilizing the gray information of the image, and recording a coarse positioning point;
step 3: carrying out fine positioning on the coarse positioning points, and retaining the fine points;
step 4: classifying the fine points to obtain a plurality of straight line classes;
step 5: acquiring the lane line according to the plurality of straight line classes.
In one embodiment of the present invention, the step 2 comprises:
step 21, dividing the image into a plurality of strips equally by utilizing the gray information of the image, and dividing each strip into a plurality of pixel blocks equally;
step 22, summing the pixel gray values within each pixel block to obtain a Sum value for each pixel block;
step 23, obtaining gradient data of each strip according to the Sum values of its pixel blocks;
step 24, searching a maximum value point and a minimum value point in the gradient data of each strip;
step 25, recording the maximum value points and the minimum value points as the coarse positioning points.
In one embodiment of the present invention, the step 3 comprises:
step 31, selecting a coarse positioning point Pi(x, y), and taking M pixel rows above and below and xOffset pixels to the left and right, centered on Pi(x, y);
step 32, performing a convolution operation on each of these pixel rows at the coarse positioning point to obtain the pixel extreme point of each pixel row;
step 33, averaging the abscissas of the pixel extreme points of the upper M pixel rows to obtain X1, and averaging the abscissas of the pixel extreme points of the lower M pixel rows to obtain X2;
step 34, judging whether the absolute value of the difference between each pixel extreme point of the upper M pixel rows and the average value X1 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 35, judging whether the absolute value of the difference between each pixel extreme point of the lower M pixel rows and the average value X2 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 36, executing steps 31 to 35 on all the coarse positioning points in turn;
step 37, converting the coordinates of the retained pixel extreme points into fine point coordinates.
In one embodiment of the present invention, the step 37 comprises:
the abscissa of the fine point Pia(x, y) is:
Pia.x = Pim.x + (sumx1 + sumx2)/(N1 + N2)
wherein Pim.x is the abscissa of the retained pixel extreme point; sumx1 is the sum of the abscissas of the pixel extreme points retained in the upper M pixel rows; N1 is the number of pixel extreme points retained in the upper M pixel rows; sumx2 is the sum of the abscissas of the pixel extreme points retained in the lower M pixel rows; N2 is the number of pixel extreme points retained in the lower M pixel rows.
In one embodiment of the present invention, the step 4 comprises:
step 41, taking the i-th point of the plurality of fine points as a reference point, and determining a first straight line class according to the reference point, wherein i = 1, 2, 3, 4, ...;
step 42, comparing the (i+1)th fine point with the reference point;
if the (i+1)th fine point is within the range determined by the reference point, recording the (i+1)th fine point in the first straight line class, and taking the (i+1)th fine point as the new reference point of the first straight line class;
if the (i+1)th fine point is not within the range determined by the reference point, adding a new straight line class, and taking the (i+1)th fine point as the reference point of the new straight line class;
step 43, comparing the unclassified fine points with the reference points of the first straight line class and of the newly added straight line classes respectively, until the classification of the plurality of fine points is finished.
In one embodiment of the present invention, the step 42 includes:
step 421, judging whether the Y value of the (i+1)th fine point's coordinates is equal to the Y value of the reference point's coordinates;
if they are equal, the (i+1)th fine point is not within the range determined by the reference point;
if not, executing step 422;
step 422, judging whether the slope of the (i+1)th fine point is within a preset slope range;
if the slope is within the preset slope range, the (i+1)th fine point is within the range determined by the reference point;
if the slope is not within the preset slope range, the (i+1)th fine point is not within the range determined by the reference point.
In an embodiment of the present invention, the preset slope range is the slope range corresponding to the preset area in which the reference point's coordinates are located, and is obtained as follows:
acquiring the slope mean value Kavg of the preset area where the reference point is located from a preset slope information configuration table;
multiplying the slope mean value by a maximum slope coefficient and by a minimum slope coefficient respectively to obtain the slope range of the area where the reference point is located;
wherein the maximum slope coefficient is 1.2 and the minimum slope coefficient is 0.8.
In one embodiment of the present invention, the step 5 comprises:
screening the plurality of straight line classes to determine a left straight line class and a right straight line class of the lane line;
dividing the points of the left straight line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_left, y_upper_left) and (x_lower_left, y_lower_left) respectively;
dividing the points of the right straight line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_right, y_upper_right) and (x_lower_right, y_lower_right) respectively;
calculating the intermediate coordinates x_upper = (x_upper_left + x_upper_right)/2, y_upper = (y_upper_left + y_upper_right)/2; x_lower = (x_lower_left + x_lower_right)/2, y_lower = (y_lower_left + y_lower_right)/2;
connecting the intermediate coordinates (x_upper, y_upper) and (x_lower, y_lower) to obtain the lane line.
The invention has the beneficial effects that:
1. The technical scheme downsamples the whole image into longitudinal strips and blocks and exploits the longitudinal arrangement of lanes when detecting edges. Because the lane within each strip changes sharply in the horizontal direction, a simple gradient operator is applied horizontally to the block data of each strip; suspicious lane edge points can then be coarsely detected from the facts that the gradients of the left and right lane edges change in opposite directions and that gradient extreme points appear in pairs within the lane-width neighborhood. This reduces interference from edge information that is not longitudinally arranged and improves both the real-time performance of the algorithm and the reliability of the detected lane.
2. After the coarse positioning points are refined, the fine point positions of the lane edges are obtained. The slope information between fine points is then checked by table lookup against a preset lane configuration table, points meeting the condition are grouped into different straight line classes, the classes with more points than a threshold are kept, the two largest classes are selected as the left and right sides of the lane line, and the lane line is obtained by averaging the upper and lower portions of those classes, completing the detection. Interference points that are not longitudinally arranged therefore never enter the computation, which removes a large amount of useless calculation, greatly reduces the processing load of the system, and increases the processing speed.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
As shown in figs. 1 to 7: fig. 1 is a flowchart of a lane line detection method according to an embodiment of the present invention; fig. 2 is a schematic diagram of the image strip division and blocking in an embodiment of the present invention; fig. 3 is a schematic representation of the gradient data of a single strip in an embodiment of the present invention; fig. 4 is a schematic diagram of the coarse positioning points in an embodiment of the present invention; fig. 5 is a schematic diagram after fine positioning in an embodiment of the present invention; fig. 6 is a schematic diagram after classification in an embodiment of the present invention; fig. 7 is a schematic diagram of the straight line display in an embodiment of the present invention. The working principle of the lane line detection method is described in more detail below.
As shown in fig. 1, the lane line detection method provided in the embodiment of the present invention first converts the captured image into a grayscale image and then performs coarse positioning, fine positioning, point classification, and straight line display in sequence. The method specifically comprises the following steps:
step 1: acquiring an image in front of a vehicle, and converting the image into a gray scale image;
step 2: carrying out coarse positioning on the lane line by utilizing the gray information of the image, and recording a coarse positioning point;
step 3: carrying out fine positioning on the coarse positioning points, and retaining the fine points;
step 4: classifying the fine points to obtain a plurality of straight line classes;
step 5: acquiring the lane line according to the plurality of straight line classes.
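For orientation, the following minimal Python sketch shows how the five steps chain together; the helper names (coarse_locate, fine_locate, classify_points, fit_lane_line) are hypothetical stand-ins for the procedures detailed in the sections below, and OpenCV is assumed only for the grayscale conversion.

```python
import cv2  # assumed available for the grayscale conversion of step 1

def detect_lane_line(frame_bgr):
    # Step 1: convert the image in front of the vehicle to a grayscale image.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Step 2: coarse positioning from the gray information of the image.
    coarse_points = coarse_locate(gray)             # hypothetical helper
    # Step 3: fine positioning of the coarse points; retain fine points.
    fine_points = fine_locate(gray, coarse_points)  # hypothetical helper
    # Step 4: classify the fine points into straight line classes.
    line_classes = classify_points(fine_points)     # hypothetical helper
    # Step 5: obtain the lane line from the straight line classes.
    return fit_lane_line(line_classes)              # hypothetical helper
```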
< coarse positioning >
Further, performing coarse positioning on the lane line by using the gray information of the image and recording the coarse positioning points comprises:
step 21, dividing the image into a plurality of strips equally by utilizing the gray information of the image, and dividing each strip into a plurality of pixel blocks equally;
step 22, summing the pixel gray values within each pixel block to obtain a Sum value for each pixel block;
step 23, obtaining gradient data of each strip according to the Sum values of its pixel blocks;
step 24, searching a maximum value point and a minimum value point in the gradient data of each strip;
step 25, recording the maximum value points and the minimum value points as the coarse positioning points.
Specifically, the method comprises the following steps:
For coarse positioning of a W × H image, taking 720 × 200 as an example, the image is divided into a plurality of strips (usually 40), each strip containing h rows of image data (usually h = 5), and each strip is divided into blocks w pixels wide (usually w = 5 or 10), giving W/w blocks per strip. After the division into blocks, the following steps are carried out in sequence:
1. Sum the pixel gray values of each block of size w × h in each strip to obtain Sum, i.e., obtain the W/w-dimensional data Sum of each strip.
2. Perform a horizontal gradient calculation on the summed data Sum[i] of each block i by convolving the data with a template; the template can be a simple operator such as [-1, 0, 1] or [1, 0, -1].
Taking [-1, 0, 1] as an example, the gradient data of a single strip is obtained as:
Diff[i] = Sum[i+1] - Sum[i-1], i = 1, 2, 3, ..., (W/w - 1)    (1)
Taking the 720 × 200 image in fig. 2 as an example, with the upper left corner as the coordinate origin, the strip whose starting point is y = 50 is divided into blocks of size h = 5 and w = 5 and the pixel-value sums are obtained; a horizontal gradient calculation by formula (1) on each block of the strip then yields the gradient data shown in fig. 3;
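As a concrete illustration of items 1 and 2, the following is a minimal numpy sketch (function and variable names are ours) that block-sums one strip and applies the [-1, 0, 1] template of formula (1):

```python
import numpy as np

def strip_gradient(gray, y0, h=5, w=5):
    # Block-sum the strip starting at row y0 into w-pixel-wide blocks.
    H, W = gray.shape
    strip = gray[y0:y0 + h, :].astype(np.int32)
    n_blocks = W // w
    # Sum[i]: gray-value sum of the w x h block i of this strip.
    Sum = strip[:, :n_blocks * w].reshape(h, n_blocks, w).sum(axis=(0, 2))
    # Diff[i] = Sum[i+1] - Sum[i-1] (formula (1)); border blocks stay zero.
    Diff = np.zeros_like(Sum)
    Diff[1:-1] = Sum[2:] - Sum[:-2]
    return Sum, Diff
```

For the fig. 2 example, strip_gradient(gray, 50) would produce 720/5 = 144 block sums and the gradient curve sketched in fig. 3.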
3. Sequentially search the gradient data for gradient extreme-point pairs. Suspicious lane edge points can be coarsely detected from the facts that the gradients of the left and right lane edges change in opposite directions and that the gradient extreme points appear in pairs within the lane-width neighborhood.
In the embodiment of the invention, when the template [-1, 0, 1] is used to calculate the gradient, the left lane edge point is a positive gradient maximum and the right edge point is a negative gradient minimum; the gradient amplitude of each extreme value must exceed the threshold MaxMinTh, and the number of blocks separating the maximum and the minimum must lie within the lane-width threshold range. That is, for each maximum point whose gradient amplitude exceeds MaxMinTh, let its column block index be nl (left lane edge); a minimum point whose gradient amplitude also exceeds MaxMinTh is then searched for within the lane-width threshold to its right, and its column block index is denoted nr (right lane edge). If a maximum-minimum pair (nl, nr) is found, the pair is considered a suspicious lane; the extreme-point pair (nl, nr) is, according to formula (2),
mapped back to the original image to obtain the coarsely positioned point coordinates Pi(x, y), Pi+1(x, y) on the left and right sides of the lane. The position information of these points is stored, and the search for other extreme-point pairs of the current strip continues as described in this item 3. Otherwise, the found maximum point is deleted from the candidates, and the strip's gradient data is searched for the next maximum-minimum pair.
Alternatively, the gradient can be calculated with [1, 0, -1]; in that case a negative minimum is searched for first, and then a positive maximum is searched for within the lane-width block range.
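The pair search of item 3 can be sketched as follows; max_min_th is the adaptive threshold MaxMinTh described next, and min_gap/max_gap stand for the lane-width block thresholds, whose concrete values are not given in the text and are assumed here:

```python
import numpy as np

def find_pole_pairs(Diff, max_min_th, min_gap=1, max_gap=6):
    # Find (nl, nr) block-index pairs: a positive maximum (left edge)
    # followed, within [min_gap, max_gap] blocks, by a negative minimum
    # (right edge), both with gradient amplitude above max_min_th.
    pairs = []
    used = set()
    for nl in np.where(Diff > max_min_th)[0]:
        if nl in used:
            continue
        window = Diff[nl + min_gap: nl + max_gap + 1]
        if window.size == 0:
            continue
        j = int(np.argmin(window))
        if window[j] < -max_min_th:
            nr = nl + min_gap + j
            pairs.append((int(nl), int(nr)))
            used.update((nl, nr))
    return pairs
```

For the fig. 3 data this would return block pairs such as (40, 42) and (81, 84), which formula (2) then maps back to image coordinates.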
MaxMinTh is an adaptive threshold, adjusted according to the background brightness near the extreme value and to the extreme value's position. The threshold MaxMinTh is calculated as follows:
First, a basic gradient threshold baseTh is set. Since a lane line appears near-vertical in the middle of the image and increasingly inclined toward the two sides, the block-summed pixel values depend on position: the sums are larger toward the middle and smaller toward the sides. A scale coefficient locateRate is therefore set for different positions of the image, and the basic gradient threshold baseTh is scaled accordingly in each position area. The brightness mean of m blocks in the neighborhood of each found extreme point is calculated as that extreme point's background value bkgrd, and bkgrd is compared with a preset background maximum MaxBkgrdTh.
When bkgrd is less than or equal to MaxBkgrdTh:
MaxMinTh = baseTh * locateRate + bkgrd * LumaRate    (3)
When bkgrd is greater than MaxBkgrdTh, the image may suffer illumination interference that weakens the gradient change, so the gradient amplitude threshold should be reduced accordingly, in proportion (overTimes) to the difference bkgrd - MaxBkgrdTh between the background bkgrd at the extreme value and the background maximum MaxBkgrdTh.
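A sketch of the threshold computation: formula (3) is as given above, while the over-background branch is only our reading of the text, since the exact reduction formula is not reproduced here and the coefficient name over_rate is an assumption.

```python
def max_min_th(base_th, locate_rate, bkgrd, luma_rate,
               max_bkgrd_th, over_rate):
    # Adaptive gradient-amplitude threshold MaxMinTh.
    th = base_th * locate_rate + bkgrd * luma_rate  # formula (3)
    if bkgrd > max_bkgrd_th:
        # Assumed form: lower the threshold in proportion to how far the
        # local background exceeds MaxBkgrdTh (illumination interference).
        th -= (bkgrd - max_bkgrd_th) * over_rate
    return th
```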
For the gradient data shown in fig. 3, for example, the block coordinates of the finally found extreme-point pairs are (40, 42) and (81, 84); mapping back to the original image gives the coarse positioning point coordinates (202, 52), (212, 52), (407, 52), (422, 52). The coarse positioning points of the whole image are shown in fig. 4.
4. The above operations are carried out in turn on the other strips of the image to find the coarse positioning points Pi(x, y), Pi+1(x, y) on the left and right sides of the suspected lane in all strips of the whole image.
This method performs only a horizontal gradient calculation on the block pixel-value sums after the image has been divided into strips and blocks, and then realizes edge detection by searching for lane extreme points under the lane-width limit, the adaptively adjusted gradient amplitude threshold, and the direction constraint. It fully exploits the longitudinal arrangement of lanes and the direction information of the gradient changes on the left and right sides of the lane, and largely avoids the many interference edges produced by edge detection with a Canny or Sobel operator.
Dividing into strips downsamples the lane candidate points, so subsequent calculation is performed only on the few points left after coarse positioning. The common speed-up of globally discarding the upper and the left and right image blocks is avoided; the method therefore remains applicable when the data captured by the camera contains no sky background, and it can detect all lane lines in the image, both those distributed vertically in the middle and those sloping gently toward the middle from the left and right sides.
< Fine localization >
Performing fine positioning on the coarse positioning points and retaining the fine points comprises the following steps:
step 31, selecting a coarse positioning point Pi(x, y), and taking M pixel rows above and below and xOffset pixels to the left and right, centered on Pi(x, y);
step 32, performing a convolution operation on each of these pixel rows at the coarse positioning point to obtain the pixel extreme point of each pixel row;
step 33, averaging the abscissas of the pixel extreme points of the upper M pixel rows to obtain X1, and averaging the abscissas of the pixel extreme points of the lower M pixel rows to obtain X2;
step 34, judging whether the absolute value of the difference between each pixel extreme point of the upper M pixel rows and the average value X1 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 35, judging whether the absolute value of the difference between each pixel extreme point of the lower M pixel rows and the average value X2 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 36, executing steps 31 to 35 on all the coarse positioning points in turn;
step 37, converting the coordinates of the retained pixel extreme points into fine point coordinates.
After the coarse positioning points are refined, the fine point positions of the lane edges are obtained. During fine positioning, interference points that do not follow the lane's longitudinally continuous and compact arrangement are eliminated, while the fine edge point coordinates of each strip and the slope information of the edges are stored. The specific steps are as follows:
1. For each detected Pi(x, y), search for gradient extreme points in the xOffset-column neighborhood of the m rows above and below Pi(x, y). The initial x coordinate of each row is set to the x coordinate of the strip's coarse positioning point Pi(x, y). Within the xOffset range of Pi(x, y), the difference of the sums of the 4 or 5 pixel values before and after each position is taken, i.e., the image data is convolved with the template [-1, -1, -1, -1, 0, 1, 1, 1, 1] or [-1, -1, -1, -1, -1, 0, 1, 1, 1, 1, 1], and the point with the largest gradient change within the xOffset range is taken as that row's extreme coordinate Pim(x, y). The extreme points Pim(x, y) of the m rows above and below are obtained in turn.
In the embodiment of the present invention, preferred values of m are 3 for 5 × 5 blocks and 5 for 10 × 10 blocks. xOffset is usually determined from the x coordinate of Pi(x, y): 30 if the coarse positioning point lies in the first or last third of the image, and 15 if it lies in the middle third.
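A per-row sketch of this search, assuming that "the difference of the sums of the 4 pixel values before and after" means applying the template [-1, -1, -1, -1, 0, 1, 1, 1, 1] at each candidate column:

```python
import numpy as np

def row_extremum(gray, y, x_center, x_offset, taps=4):
    # Return the column in [x_center - x_offset, x_center + x_offset] of
    # row y where the gradient magnitude (sum of `taps` pixels after the
    # column minus sum of `taps` pixels before it) is largest.
    row = gray[y].astype(np.int32)
    lo = max(x_center - x_offset, taps)
    hi = min(x_center + x_offset, len(row) - taps - 1)
    best_x, best_mag = lo, -1
    for x in range(lo, hi + 1):
        g = row[x + 1:x + taps + 1].sum() - row[x - taps:x].sum()
        if abs(g) > best_mag:
            best_mag, best_x = abs(g), x
    return best_x
```

Calling row_extremum once per row for the m rows above and below Pi(x, y) yields the per-row extreme points Pim(x, y).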
2. Average the x coordinates of the extreme points Pim(x, y) of the m rows above and of the m rows below, obtaining the mean values X1 and X2 respectively. Compare the x coordinate of each extreme point Pim(x, y) in the upper and lower m rows with the mean X1 or X2 respectively, and retain the points lying within xTh pixels of the mean coordinate (xTh is preferably 3). That is, compute the difference between each pixel extreme point and the corresponding mean coordinate and take its absolute value as the distance, which determines whether the point is retained.
The number of points retained in the upper M pixel rows is counted as N1, and the number retained in the lower M pixel rows as N2.
3. When N1 and N2 are both greater than a preset value PnTh, compute the sums sumx1 and sumx2 of the x coordinates of the retained upper and lower points respectively, i.e., sum the x coordinates of all pixel extreme points retained in the upper M pixel rows, and likewise for the lower M pixel rows; then compute the respective means avg1 = sumx1/N1 and avg2 = sumx2/N2.
It should be noted that the preset value PnTh is 2 when M is 5 and 3 when M is 10.
4. Using the mean value of the retained points, the x coordinate of Pim(x, y) is converted into the x coordinate of the fine point Pia(x, y) according to the following formula:
Pia.x = Pim.x + (sumx1 + sumx2)/(N1 + N2)
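The retention test and the final combination can be sketched as below. One point is an assumption on our part: sumx1/sumx2 are treated as sums of offsets from the coarse x coordinate, so that the formula yields a coordinate inside the image; the patent's exact bookkeeping may differ.

```python
import numpy as np

def refine_x(extrema_up, extrema_down, x_coarse, x_th=3, pn_th=2):
    # Keep row extrema within x_th pixels of their half's mean, then
    # combine them into the fine point abscissa Pia.x.
    def keep(xs):
        xs = np.asarray(xs, dtype=float)
        return xs[np.abs(xs - xs.mean()) <= x_th] if xs.size else xs
    up, down = keep(extrema_up), keep(extrema_down)
    n1, n2 = up.size, down.size
    if n1 <= pn_th or n2 <= pn_th:
        return None  # too few consistent rows: discard as interference
    sumx1 = (up - x_coarse).sum()    # upper-row offsets (assumption)
    sumx2 = (down - x_coarse).sum()  # lower-row offsets (assumption)
    # Pia.x = Pim.x + (sumx1 + sumx2) / (N1 + N2)
    return x_coarse + (sumx1 + sumx2) / (n1 + n2)
```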
5. The fine point positions of all the coarse positioning points are obtained in this way, as shown in fig. 5.
< Classification Point >
Because the gradients of the left and right lane edges change in opposite directions and the gradient extreme points appear in pairs within the lane-width neighborhood, the embodiment of the invention can classify the fine points either one at a time or as retained pairs. Whether a fine point belongs to a class is judged by calculating its slope and checking whether the slope lies within a preset range; when the slopes satisfy the maximum and minimum slope limits, the fine points are assigned to the same straight line class, i.e., they are points on the same straight line. The classification method comprises the following steps:
step 41, taking the i-th point of the plurality of fine points as a reference point, and determining a first straight line class according to the reference point, wherein i = 1, 2, 3, 4, ...;
step 42, comparing the (i+1)th fine point with the reference point;
if the (i+1)th fine point is within the range determined by the reference point, recording the (i+1)th fine point in the first straight line class, and taking the (i+1)th fine point as the new reference point of the first straight line class;
if the (i+1)th fine point is not within the range determined by the reference point, adding a new straight line class, and taking the (i+1)th fine point as the reference point of the new straight line class;
step 43, comparing the unclassified fine points with the reference points of the first straight line class and of the newly added straight line classes respectively, until the classification of the plurality of fine points is finished.
Wherein step 42 comprises:
step 421, judging whether the Y value of the (i+1)th fine point's coordinates is equal to the Y value of the reference point's coordinates;
if they are equal, the (i+1)th fine point is not within the range determined by the reference point;
if not, executing step 422;
step 422, judging whether the slope of the (i+1)th fine point is within a preset slope range;
if the slope is within the preset slope range, the (i+1)th fine point is within the range determined by the reference point;
if the slope is not within the preset slope range, the (i+1)th fine point is not within the range determined by the reference point.
Specifically, taking the classification of paired fine points as an example:
1. In the embodiment of the invention, the first pair of fine points P0a(x, y), P1a(x, y) serves as the reference points. Starting from the second pair of fine points Pia(x, y), P(i+1)a(x, y), it is first judged whether the y coordinates of the second pair and of the reference pair are unequal with a difference smaller than the threshold yTh; if so, the pairs are not in the same strip but are within the longitudinal comparison range, and the slope information of the fine point pairs is calculated and stored: for Pia(x, y) with P0a(x, y) and for P(i+1)a(x, y) with P1a(x, y), ki = (Pia.x - P0a.x)/(Pia.y - P0a.y) and ki+1 = (P(i+1)a.x - P1a.x)/(P(i+1)a.y - P1a.y);
when the y coordinates are equal, the point pair is stored as the reference of a new straight line class, and the next pair of fine points is processed as above.
In this embodiment, the threshold yTh is 5.
2. For the calculated slopes ki, ki+1 of the i-th pair of fine points, the slope mean Kavg of the corresponding image position area is obtained from the preset slope information configuration table. When MinRate*Kavg <= ki <= MaxRate*Kavg and MinRate*Kavg <= ki+1 <= MaxRate*Kavg, the fine points and their corresponding reference points belong to the same straight lines; the fine points are assigned to the respective straight line classes, and the fine point pair replaces the original reference point pair as the new reference points of those classes.
Subsequent fine points are compared with the new reference points of each straight line class to judge whether they are points of that class.
If the slope of a fine point is not within the slope range, the point is stored as the reference point of a new straight line class, and the next fine point is processed as above.
Typically, the minimum slope coefficient MinRate takes 0.8 and the maximum slope coefficient MaxRate takes 1.2.
This classification process is carried out on all the fine points, and the finally retained fine points form a plurality of straight line classes.
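A compact sketch of the single-point variant of this classification; kavg_lookup stands in for the preset slope information configuration table and is an assumed interface:

```python
def classify_fine_points(points, kavg_lookup, min_rate=0.8, max_rate=1.2):
    # Assign fine points (x, y) to straight line classes; each class
    # keeps its most recently added point as the reference point.
    classes = []
    for p in points:
        placed = False
        for cls in classes:
            ref = cls[-1]
            if p[1] == ref[1]:
                continue  # equal y: same strip, not in this class's range
            k = (p[0] - ref[0]) / (p[1] - ref[1])
            kavg = kavg_lookup(ref[0], ref[1])
            lo, hi = sorted((min_rate * kavg, max_rate * kavg))
            if lo <= k <= hi:
                cls.append(p)  # p becomes the new reference of this class
                placed = True
                break
        if not placed:
            classes.append([p])  # open a new line class with p as reference
    return classes
```

The sorted() call keeps the interval valid when Kavg is negative (lines on one side of the lane slope the other way), where 0.8*Kavg is numerically the larger bound.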
The process of classifying fine point pairs is the same as that of classifying single fine points. In the embodiment of the invention, after all the fine points are classified, the straight line classes are screened to determine the left and right straight line classes of the lane line:
1. After all the fine points are classified, each straight line class is screened by its point count PmNum: a class with more points than a preset value is representative and is retained, while a class with especially few points, fewer than the preset value, is not representative and is discarded. The preset value is usually 4.
For the retained straight line classes, the slope Km is calculated.
2. The point counts PmNum of the different straight line classes are adjusted using each class's slope, so that the point-count weight of lines in the middle area of the image is reduced and the weight of inclined lines on the left and right sides is increased. The adjustment formula is:
PmNum2 = PmNum/(moffset - poffset*Km)    (5)
Typically, moffset is taken as 1500 and poffset as 2; then step 5 is executed.
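Items 1 and 2 can be sketched as below; how the slope Km of a class is fitted is not specified above, so the two-end-point estimate used here is an assumption:

```python
def screen_and_weight(classes, min_points=4, moffset=1500, poffset=2):
    # Drop unrepresentative line classes, estimate each slope Km, and
    # adjust the point count by formula (5).
    kept = []
    for cls in classes:
        if len(cls) <= min_points:
            continue  # too few points: not representative, discard
        (x0, y0), (x1, y1) = cls[0], cls[-1]
        km = (x1 - x0) / (y1 - y0) if y1 != y0 else 0.0
        pm_num2 = len(cls) / (moffset - poffset * km)  # formula (5)
        kept.append((pm_num2, km, cls))
    return kept
```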
3. As shown in fig. 6, for the selected straight line classes: if the extension of the line computed from the slope Km leaves the image (crosses a column smaller than 0 or larger than the image width), the line represented by that class is an inclined line in the image, i.e., it lies in the left or right region; among the classes whose point count exceeds the threshold BandNumLR, the two classes with the largest point counts are selected as the left and right sides of the final lane line. Otherwise, the line lies in the middle area of the image and the class is discarded.
In the embodiment of the invention, straight line detection does not calculate line parameters by Hough transform; instead, the slope information of the left and right sides of the lane is compared with the preset information, and the screened fine points are assigned to different straight line classes by this comparison. This essentially avoids the time-consuming parameter calculation of Hough detection, greatly increases the operation speed, and makes real-time lane line detection feasible on an embedded system.
< straight line display >
Dividing the points of the left straight line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_left, y_upper_left) and (x_lower_left, y_lower_left) respectively;
dividing the points of the right straight line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_right, y_upper_right) and (x_lower_right, y_lower_right) respectively;
calculating the intermediate coordinates x_upper = (x_upper_left + x_upper_right)/2, y_upper = (y_upper_left + y_upper_right)/2; x_lower = (x_lower_left + x_lower_right)/2, y_lower = (y_lower_left + y_lower_right)/2;
as shown in fig. 7, connecting the intermediate coordinates (x_upper, y_upper) and (x_lower, y_lower) obtains the lane line, completing the detection of the lane line.
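A sketch of this straight line display step (names ours), taking the selected left and right line classes as lists of (x, y) points:

```python
import numpy as np

def lane_end_points(left_pts, right_pts):
    # Average the upper and lower halves of each side, then take the
    # left/right midpoints as the two end points of the lane line.
    def halves(pts):
        pts = np.asarray(sorted(pts, key=lambda p: p[1]), dtype=float)
        half = len(pts) // 2
        return pts[:half].mean(axis=0), pts[half:].mean(axis=0)
    l_up, l_low = halves(left_pts)
    r_up, r_low = halves(right_pts)
    upper = (l_up + r_up) / 2.0  # (x_upper, y_upper)
    lower = (l_low + r_low) / 2.0  # (x_lower, y_lower)
    return tuple(upper), tuple(lower)
```

The two returned coordinates are the (x_upper, y_upper) and (x_lower, y_lower) points connected above.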
In summary, specific examples have been used to describe the implementation of the lane line detection method provided in the embodiments of the present invention; the description of the above embodiments is only intended to help in understanding the scheme and core idea of the invention. Meanwhile, a person skilled in the art may, following the idea of the invention, vary the specific embodiments and the application scope. Accordingly, the content of this specification should not be construed as limiting the invention, and the scope of the invention is defined by the appended claims.