CN105740805A - Lane line detection method based on multi-region joint - Google Patents

Lane line detection method based on multi-region joint

Info

Publication number
CN105740805A
CN105740805A
Authority
CN
China
Prior art keywords
image
prime
edge
lane line
perspective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610057089.4A
Other languages
Chinese (zh)
Other versions
CN105740805B (en)
Inventor
田雨农
范玉涛
周秀田
于维双
陆振波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Yun de Xingye Technology Co.,Ltd.
Original Assignee
Dalian Roiland Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Roiland Technology Co Ltd filed Critical Dalian Roiland Technology Co Ltd
Priority to CN201610057089.4A priority Critical patent/CN105740805B/en
Publication of CN105740805A publication Critical patent/CN105740805A/en
Application granted granted Critical
Publication of CN105740805B publication Critical patent/CN105740805B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09Recognition of logos

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a lane line detection method based on multi-region joint detection, comprising the following steps: calibrating the lane lines in an image captured by a camera so as to carry out a perspective transformation of the target region; binarizing the perspective-transformed region and extracting edges; dividing the edge-extracted image into regions with additional buffer zones to obtain several edge images; and obtaining lane line information from the edge images. By changing the direction of detection through the perspective transformation, the method strengthens the lane line features and provides better features for subsequent detection and recognition, improving the accuracy of lane line recognition. Using a multi-region joint detection strategy, the invention addresses the worst-case time cost of the voting process in the Hough transform by proposing a multi-region joint detection and overlap detection method.

Description

Lane line detection method based on multi-region joint detection
Technical field
The present invention relates to a lane line detection method, and in particular to a lane line detection method based on multi-region joint detection.
Background technology
Autonomous driving technology uses video cameras, radar sensors and laser rangefinders to understand the surrounding traffic, and navigates the road ahead with the help of a detailed map (a map gathered by manned vehicles). All of this is carried out by Google's data centers, which process the vast amount of information the cars collect about the surrounding terrain. In this respect, an autonomous vehicle is the equivalent of a remote-controlled car or intelligent car of Google's data centers. Autonomous driving is one of the applications of Internet of Things technology.
At present, existing autonomous driving systems generally use the Hough transform to recognize lane lines and realize real-time road detection. However, when the Hough transform is applied to lane detection, the voting table it builds is excessively large, each pass of finding the maximum is seriously time-consuming, and detecting several lane lines usually requires greatly increasing the number of peak responses, which is inconvenient in practical applications. Aimed at lane detection in the image recognition field of autonomous driving, the present invention proposes a new lane line detection method.
Summary of the invention
To solve the above technical problem, the present invention proposes a lane line detection method based on multi-region joint detection.
The technical solution adopted by the present invention is as follows. A lane line detection method based on multi-region joint detection comprises the following steps:
calibrating the lane lines in the image captured by the camera so as to carry out a perspective transformation of the target region;
binarizing the perspective-transformed region and extracting edges;
dividing the edge-extracted image into regions with additional buffer zones to obtain several edge images;
obtaining lane line information from the several edge images.
Calibrating the lane lines in the image captured by the camera comprises the following steps:
selecting 4 points on the inner sides of two adjacent lane lines in the image captured by the camera as calibration points;
establishing a rectangular coordinate system with the top-left point as the origin, horizontal right as the positive X direction and vertical down as the positive Y direction;
obtaining the perspective transform coefficients from the coordinates of the 4 calibration points in the rectangular coordinate system;
applying the perspective transformation to every point of each region according to the coefficients of the respective region.
The perspective transform coefficients are obtained from the following formula:

$$
\begin{bmatrix} x_1' \\ y_1' \\ \vdots \\ x_n' \\ y_n' \end{bmatrix}
=
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1'x_1 & -x_1'y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -y_1'x_1 & -y_1'y_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_n & y_n & 1 & 0 & 0 & 0 & -x_n'x_n & -x_n'y_n \\
0 & 0 & 0 & x_n & y_n & 1 & -y_n'x_n & -y_n'y_n
\end{bmatrix}
\begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \\ m_6 \\ m_7 \\ m_8 \end{bmatrix}
$$

where $m_1$–$m_8$ are the perspective transform coefficients; $(x_i, y_i)$ are the coordinates of the 4 calibration points in the rectangular coordinate system, and $(x_i', y_i')$ are their coordinates after the perspective transformation; $i = 1, \dots, n$, with $n = 4$.
Applying the perspective transformation to every point of each region according to the coefficients of the respective region is realized by the following formula:

$$
\begin{bmatrix} u \\ v \\ w \end{bmatrix}
=
\begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & 1 \end{bmatrix}
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
$$

where $(u, v, w)$ are the coordinates of an arbitrary point in the rectangular coordinate system after the perspective transformation, and $(x', y')$ are the coordinates of an arbitrary point on the grayscale image captured by the camera; the perspective matrix consists of $m_1$–$m_8$ and 1.
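As a concrete illustration of the two formulas above, the following Python sketch solves the eight coefficients from four point correspondences and applies the resulting 3×3 matrix to a point. This is illustrative code, not the patent's implementation; the function names are our own, and the forward direction used here follows the coefficient-solving equation (calibration points map to their transformed positions).

```python
import numpy as np

def solve_perspective_coeffs(src_pts, dst_pts):
    """Solve the 8 perspective coefficients m1..m8 from n = 4 point
    correspondences, following the linear system in the text."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        # one row pair per correspondence, matching the 2n x 8 matrix
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b.extend([xp, yp])
    return np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))

def apply_perspective(m, x, y):
    """Apply M = [[m1,m2,m3],[m4,m5,m6],[m7,m8,1]] to a point;
    the Euclidean result is (u/w, v/w)."""
    M = np.array([[m[0], m[1], m[2]],
                  [m[3], m[4], m[5]],
                  [m[6], m[7], 1.0]])
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w
```

With real data, the four source points would be the inner-side calibration points of two adjacent lane lines and the destination points their desired rectified positions.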
Binarizing the perspective-transformed region and extracting edges comprises the following steps:
setting the sizes of two filters;
filtering the perspective-transformed region with the two filters to obtain two images, and differencing the two images to obtain the binary image of the target features;
starting from the leftmost side of the binary image, subtracting the (i+1)-th column from the i-th column and storing the difference as the (i+1)-th column, yielding the target left-edge image;
starting from the rightmost side of the binary image, subtracting the (i−1)-th column from the i-th column and storing the difference as the (i−1)-th column, yielding the target right-edge image, where i = 1, …, w;
searching the target left-edge image from the top-left corner for the target left edge, and the target right-edge image from the top-left corner for the target right edge; when both the target left edge and the target right edge are found, averaging the left-edge and right-edge positions to obtain the extracted target center line.
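The column-differencing and averaging steps above can be sketched as follows. This is a minimal NumPy illustration under the assumption that a "left edge" is a 0→1 transition and a "right edge" a 1→0 transition along each row; the function name is our own.

```python
import numpy as np

def edges_and_centerline(binary):
    """From a 0/1 binary image, build left- and right-edge images by
    column differencing, then average the first matched edge pair per
    row to obtain a one-pixel-wide center line."""
    b = binary.astype(np.int8)
    diff = np.diff(b, axis=1)                  # col[i+1] - col[i]
    left = np.zeros_like(b)
    right = np.zeros_like(b)
    left[:, 1:] = (diff == 1)                  # 0 -> 1: target left edge
    right[:, :-1] = (diff == -1)               # 1 -> 0: target right edge
    center = np.zeros_like(b)
    for r in range(b.shape[0]):
        lc = np.flatnonzero(left[r])
        rc = np.flatnonzero(right[r])
        if lc.size and rc.size:                # both edges found in this row
            center[r, (lc[0] + rc[0]) // 2] = 1
    return left, right, center
```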
Dividing the edge-extracted image into regions with additional buffer zones to obtain several edge images comprises the following steps:
vertically dividing the edge-extracted image into k regions, where k = (set number of lane lines)/2 + 1;
extending the dividing edge of each region by c pixels as a buffer zone, where c = (image width after edge extraction) × threshold; this yields several edge images.
Obtaining lane line information from the several edge images comprises the following steps:
performing the Hough transform on each image to obtain Hough radii and Hough angles;
voting on the Hough radii, and taking the several groups of Hough radii and Hough angles with the most votes as the lane line information.
After the lane line information is obtained from the several edge images, the lane line information of the several edge images is integrated to obtain a plurality of lane lines, which are mapped back to the original image.
The present invention has the following advantages:
1. By changing the direction of detection through the perspective transformation, the present invention strengthens the features of the lane lines and provides better features for subsequent detection and recognition, ultimately improving the accuracy of lane line recognition.
2. By adopting a multi-region joint detection strategy, the present invention addresses the time cost of finding maxima in the voting process of the Hough transform with a multi-region joint detection and overlap detection method.
Description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is the original image;
Fig. 3 is the perspective-transformed image;
Fig. 4 is the binarized image;
Fig. 5 is the edge-detection image;
Fig. 6 is the Hough transform result image (taking the left part of the figure as an example);
Fig. 7 is the final result image.
Detailed description of the invention
The present invention is described in further detail below in conjunction with an embodiment.
The present invention first performs a perspective transformation on the image in front of the vehicle shown in Fig. 2, yielding Fig. 3; the perspective-transformed image is binarized, as shown in Fig. 4; edge detection is performed on the binarization result, as shown in Fig. 5; straight lines are detected on the edge-detection result with the Hough transform, as shown in Fig. 6; and the detected lines are mapped back to the original image for display, as shown in Fig. 7.
As shown in Fig. 1, the present invention comprises the following steps:
1. One frame of the video is taken as a sample for calibration. In the present invention, the 4 corner points of an arbitrarily chosen rectangle are manually selected as the 4 feature points (the 4 inner-side points of two adjacent dashed lane lines are chosen as the feature points). The coordinates of the 4 points are taken on the image with the top-left point of the image as the origin, rightward as the positive X direction and downward as the positive Y direction; the points lie in the lower middle of the image region.
2. The perspective transform coefficients are solved according to the following equation:

$$
\begin{bmatrix} x_1' \\ y_1' \\ \vdots \\ x_n' \\ y_n' \end{bmatrix}
=
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1'x_1 & -x_1'y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -y_1'x_1 & -y_1'y_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_n & y_n & 1 & 0 & 0 & 0 & -x_n'x_n & -x_n'y_n \\
0 & 0 & 0 & x_n & y_n & 1 & -y_n'x_n & -y_n'y_n
\end{bmatrix}
\begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \\ m_6 \\ m_7 \\ m_8 \end{bmatrix}
$$

where $m_1, \dots, m_8$ are the perspective transform coefficients (as a vector), $(x_i, y_i)$ are the original coordinates, $(x_i', y_i')$ are the coordinates after the perspective transformation, and $i = 1, \dots, n$ with $n = 4$.
3. The whole region is transformed with these coefficients, namely

$$
\begin{bmatrix} u \\ v \\ w \end{bmatrix}
=
\begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & 1 \end{bmatrix}
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
$$

where $(u, v, w)$ are the coordinates after the perspective transformation and $(x', y')$ are the original coordinates; $M$ is the perspective matrix, with elements $m_1, \dots, m_8$ and 1.
The resulting perspective view is shown in Fig. 3.
4. Each region after the perspective transformation is binarized with dual-scale filtering, and edge extraction is carried out, comprising the following steps:
The size of the first filter is set to 3×3 and the size of the second filter to 101×101;
the perspective-transformed region is filtered with the first and second filters to obtain two images; the pixel values of the two images are differenced to obtain the binary image of the target features;
the target left-edge image is traversed over the entire image starting from the top-left corner. When the traversal finds that the current pixel value is 0 and the next pixel value is 1, the column index of the current pixel is recorded and the traversal continues; when it finds that the current element value is 1 and the next element value is 0, the column index of the current element is recorded again. The mean of this column index and the previously recorded one is computed, the element in the column of the mean is set to 1, and all other values are set to 0. This yields the edge image, whose edge information represents the lane line information.
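The dual-scale filtering of step 4 can be sketched with two box mean filters whose difference keeps bright, narrow structures (lane markings) and suppresses the broad background. This is an illustrative NumPy version, not the patent's implementation; the patent uses 3×3 and 101×101 kernels, while the test below uses a smaller large kernel for a toy image.

```python
import numpy as np

def box_mean(img, k):
    """k x k mean filter (k odd), replicate-padded, via separable
    sliding cumulative sums."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode='edge')
    c = np.cumsum(padded, axis=0)
    rows = (c[k - 1:, :] -
            np.vstack([np.zeros((1, padded.shape[1])), c[:-k, :]])) / k
    c = np.cumsum(rows, axis=1)
    cols = (c[:, k - 1:] -
            np.hstack([np.zeros((rows.shape[0], 1)), c[:, :-k]])) / k
    return cols

def dual_scale_binarize(img, k_small=3, k_large=101, thresh=0.0):
    """Difference of a small and a large mean filter, thresholded:
    narrow bright features survive, the smooth background cancels out."""
    d = box_mean(img, k_small) - box_mean(img, k_large)
    return (d > thresh).astype(np.uint8)
```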
5. The edge result image is divided according to the requirements of lane detection. The requirement of the present invention is to detect the current lane and the two adjacent ones, 4 lane lines in total, so the image is divided into 3 regions: left, middle and right (see Fig. 6). The dividing edge of each part is extended by an extra 20 pixels ((image width after edge extraction) × threshold, with a threshold of 0.05) as a buffer zone. This effectively solves the problem that dividing the image makes the response of a lane line lying near a region boundary (for example, at a sidewalk) too small for recognition.
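Step 5's division with buffer zones can be sketched as follows. This is illustrative code; the exact strip boundaries and the integer truncation of c are our assumptions.

```python
import numpy as np

def split_with_buffer(edge_img, num_lanes=4, thresh=0.05):
    """Split the edge image into k = num_lanes/2 + 1 vertical strips,
    widening each strip by c = width * thresh pixels on each side so a
    lane line near a dividing edge is not cut in half."""
    h, w = edge_img.shape
    k = num_lanes // 2 + 1
    c = int(w * thresh)
    strips = []
    for i in range(k):
        lo = max(0, i * w // k - c)          # left bound with buffer
        hi = min(w, (i + 1) * w // k + c)    # right bound with buffer
        strips.append((lo, hi, edge_img[:, lo:hi]))
    return strips
```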
6. Lane lines are detected with the Hough transform from the obtained edge information.
In the edge image obtained, the Hough transform is applied to the edge points: the coordinates (x, y) of every edge point are substituted into the following equation to compute r:
r = x·sin θ + y·cos θ
where θ is taken between 0 and π, sampled every 0.03; votes are accumulated over all radii r, and the 20 pairs of r and θ with the most votes are the detected lane lines.
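Step 6's voting can be sketched directly from the formula r = x·sin θ + y·cos θ. This is an illustrative brute-force accumulator; quantizing r to integer bins is our assumption, and the function name is our own.

```python
import numpy as np

def hough_vote(edge_points, n_top=20, theta_step=0.03, r_res=1.0):
    """Vote in (r, theta) space with r = x*sin(theta) + y*cos(theta),
    the parameterisation used in the text, and return the n_top
    highest-voted (r, theta) pairs."""
    thetas = np.arange(0.0, np.pi, theta_step)
    votes = {}
    for x, y in edge_points:
        # quantize r so collinear points fall into the same bin
        rs = np.round((x * np.sin(thetas) + y * np.cos(thetas)) / r_res)
        for r, t in zip(rs, thetas):
            key = (float(r), round(float(t), 6))
            votes[key] = votes.get(key, 0) + 1
    best = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)[:n_top]
    return [(r * r_res, t) for (r, t), _ in best]
```

For a vertical line x = 5, all edge points vote for r ≈ 5 at θ near π/2.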
7. Lane lines are filtered according to the lane spacing parameter under the perspective transformation.
Y_p is chosen as the vertical center of the image, i.e. half the image height, and substituted into the Hough transform formula to compute X_p:

$$
x_p = \frac{r - y_p \cos\theta}{\sin\theta}
$$

where $X_p$ is the abscissa of each lane line at ordinate $Y_p$, $Y_p$ is half the image height, $r$ is the Hough radius and $\theta$ is the Hough angle.
The X_p values are sorted starting from the minimum; whenever the interval between two adjacent X_p values is less than 3/4 of the lane width, the larger of the two is deleted. The remaining values are the detected lane lines.
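Step 7's spacing filter can be sketched as follows. This is illustrative code; the greedy keep-the-smaller rule follows the text's "delete the larger of the two".

```python
import numpy as np

def filter_lanes(lines, y_p, lane_width):
    """For each (r, theta), compute x_p = (r - y_p*cos(theta))/sin(theta),
    sort ascending, and drop any value closer than 3/4 of the lane width
    to the previously kept one."""
    xs = sorted((r - y_p * np.cos(t)) / np.sin(t) for r, t in lines)
    kept = []
    for x in xs:
        if not kept or x - kept[-1] >= 0.75 * lane_width:
            kept.append(x)
    return kept
```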
8. The multi-region lane line information is combined.
The lane line information of the 3 regions is integrated, i.e. the information of the lane lines (Hough angles and Hough radii) forms a Hough angle matrix and a Hough radius matrix, and the result is mapped back to the original image as the final output.
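Step 8 only states that the per-region (r, θ) results are integrated. One way to sketch the integration, under our assumption that each strip's radius must be compensated for the strip's horizontal offset (a point at strip column x sits at original column x + offset, so r = x·sin θ + y·cos θ gains offset·sin θ), is:

```python
import numpy as np

def merge_regions(region_results, offsets):
    """Merge per-region Hough results into one list in the original
    image's coordinates by shifting each r by offset*sin(theta)."""
    merged = []
    for lines, off in zip(region_results, offsets):
        for r, t in lines:
            merged.append((r + off * np.sin(t), t))
    return merged
```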

Claims (8)

1. A lane line detection method based on multi-region joint detection, characterized by comprising the following steps:
calibrating the lane lines in the image captured by the camera so as to carry out a perspective transformation of the target region;
binarizing the perspective-transformed region and extracting edges;
dividing the edge-extracted image into regions with additional buffer zones to obtain several edge images;
obtaining lane line information from the several edge images.
2. The lane line detection method based on multi-region joint detection according to claim 1, characterized in that calibrating the lane lines in the image captured by the camera comprises the following steps:
selecting 4 points on the inner sides of two adjacent lane lines in the image captured by the camera as calibration points;
establishing a rectangular coordinate system with the top-left point as the origin, horizontal right as the positive X direction and vertical down as the positive Y direction;
obtaining the perspective transform coefficients from the coordinates of the 4 calibration points in the rectangular coordinate system;
applying the perspective transformation to every point of each region according to the coefficients of the respective region.
3. The lane line detection method based on multi-region joint detection according to claim 2, characterized in that the perspective transform coefficients are obtained from the following formula:

$$
\begin{bmatrix} x_1' \\ y_1' \\ \vdots \\ x_n' \\ y_n' \end{bmatrix}
=
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1'x_1 & -x_1'y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -y_1'x_1 & -y_1'y_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_n & y_n & 1 & 0 & 0 & 0 & -x_n'x_n & -x_n'y_n \\
0 & 0 & 0 & x_n & y_n & 1 & -y_n'x_n & -y_n'y_n
\end{bmatrix}
\begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \\ m_6 \\ m_7 \\ m_8 \end{bmatrix}
$$

where $m_1$–$m_8$ are the perspective transform coefficients; $(x_i, y_i)$ are the coordinates of the 4 calibration points in the rectangular coordinate system, and $(x_i', y_i')$ are their coordinates after the perspective transformation; $i = 1, \dots, n$, with $n = 4$.
4. The lane line detection method based on multi-region joint detection according to claim 2, characterized in that applying the perspective transformation to every point of each region according to the coefficients of the respective region is realized by the following formula:

$$
\begin{bmatrix} u \\ v \\ w \end{bmatrix}
=
\begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & 1 \end{bmatrix}
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
$$

where $(u, v, w)$ are the coordinates of an arbitrary point in the rectangular coordinate system after the perspective transformation, and $(x', y')$ are the coordinates of an arbitrary point on the grayscale image captured by the camera; $m_1$–$m_8$ and 1 are the elements of the perspective matrix $M$.
5. The lane line detection method based on multi-region joint detection according to claim 1, characterized in that binarizing the perspective-transformed region and extracting edges comprises the following steps:
setting the sizes of two filters;
filtering the perspective-transformed region with the two filters to obtain two images, and differencing the two images to obtain the binary image of the target features;
starting from the leftmost side of the binary image, subtracting the (i+1)-th column from the i-th column and storing the difference as the (i+1)-th column, yielding the target left-edge image;
starting from the rightmost side of the binary image, subtracting the (i−1)-th column from the i-th column and storing the difference as the (i−1)-th column, yielding the target right-edge image, where i = 1, …, w;
searching the target left-edge image from the top-left corner for the target left edge, and the target right-edge image from the top-left corner for the target right edge; when both the target left edge and the target right edge are found, averaging the left-edge and right-edge positions to obtain the extracted target center line.
6. The lane line detection method based on multi-region joint detection according to claim 1, characterized in that dividing the edge-extracted image into regions with additional buffer zones to obtain several edge images comprises the following steps:
vertically dividing the edge-extracted image into k regions, where k = (set number of lane lines)/2 + 1;
extending the dividing edge of each region by c pixels as a buffer zone to obtain several edge images, where c = (image width after edge extraction) × threshold.
7. The lane line detection method based on multi-region joint detection according to claim 1, characterized in that obtaining lane line information from the several edge images comprises the following steps:
performing the Hough transform on each image to obtain Hough radii and Hough angles;
voting on the Hough radii, and taking the several groups of Hough radii and Hough angles with the most votes as the lane line information.
8. The lane line detection method based on multi-region joint detection according to claim 1, characterized in that after the lane line information is obtained from the several edge images, the lane line information of the several edge images is integrated to obtain a plurality of lane lines, which are mapped back to the original image.
CN201610057089.4A 2016-01-27 2016-01-27 Lane line detection method based on multi-region joint detection Active CN105740805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610057089.4A CN105740805B (en) 2016-01-27 2016-01-27 Lane line detection method based on multi-region joint detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610057089.4A CN105740805B (en) 2016-01-27 2016-01-27 Lane line detection method based on multi-region joint detection

Publications (2)

Publication Number Publication Date
CN105740805A true CN105740805A (en) 2016-07-06
CN105740805B CN105740805B (en) 2019-06-07

Family

ID=56247750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610057089.4A Active CN105740805B (en) 2016-01-27 2016-01-27 Lane line detection method based on multi-region joint detection

Country Status (1)

Country Link
CN (1) CN105740805B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN111428538A (en) * 2019-01-09 2020-07-17 阿里巴巴集团控股有限公司 Lane line extraction method, device and equipment
CN113505747A (en) * 2021-07-27 2021-10-15 浙江大华技术股份有限公司 Lane line recognition method and apparatus, storage medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008034304A1 (en) * 2007-07-24 2009-04-16 Nec Electronics Corp., Kawasaki Built-in image processing device for vehicles
CN101469991A (en) * 2007-12-26 2009-07-01 南京理工大学 All-day structured road multi-lane line detection method
CN102629326A (en) * 2012-03-19 2012-08-08 天津工业大学 Lane line detection method based on monocular vision
CN103226817A (en) * 2013-04-12 2013-07-31 武汉大学 Superficial venous image augmented reality method and device based on perspective projection
CN103488975A (en) * 2013-09-17 2014-01-01 北京联合大学 Zebra crossing real-time detection method based in intelligent driving

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008034304A1 (en) * 2007-07-24 2009-04-16 Nec Electronics Corp., Kawasaki Built-in image processing device for vehicles
CN101469991A (en) * 2007-12-26 2009-07-01 南京理工大学 All-day structured road multi-lane line detection method
CN102629326A (en) * 2012-03-19 2012-08-08 天津工业大学 Lane line detection method based on monocular vision
CN103226817A (en) * 2013-04-12 2013-07-31 武汉大学 Superficial venous image augmented reality method and device based on perspective projection
CN103488975A (en) * 2013-09-17 2014-01-01 北京联合大学 Zebra crossing real-time detection method based in intelligent driving

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAI Qin et al., "Perspective image rectification based on improved Hough transform and perspective transform", Chinese Journal of Liquid Crystals and Displays *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN108734105B (en) * 2018-04-20 2020-12-04 东软集团股份有限公司 Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN111428538A (en) * 2019-01-09 2020-07-17 阿里巴巴集团控股有限公司 Lane line extraction method, device and equipment
CN113505747A (en) * 2021-07-27 2021-10-15 浙江大华技术股份有限公司 Lane line recognition method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN105740805B (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN103500322B (en) Automatic lane line identification method based on low latitude Aerial Images
DE102013205950B4 (en) Roadside detection method
CN105678285B (en) A kind of adaptive road birds-eye view transform method and road track detection method
DE102015203016B4 (en) Method and device for optical self-localization of a motor vehicle in an environment
DE102009050505A1 (en) Clear path detecting method for vehicle i.e. motor vehicle such as car, involves modifying clear path based upon analysis of road geometry data, and utilizing clear path in navigation of vehicle
CN105740809A (en) Expressway lane line detection method based on onboard camera
CN105488485B (en) Lane line extraction method based on track of vehicle
CN106250816A (en) A kind of Lane detection method and system based on dual camera
DE102009050492A1 (en) Travel's clear path detection method for motor vehicle i.e. car, involves monitoring images, each comprising set of pixels, utilizing texture-less processing scheme to analyze images, and determining clear path based on clear surface
DE102009048699A1 (en) Travel's clear path detection method for motor vehicle i.e. car, involves monitoring images, each comprising set of pixels, utilizing texture-less processing scheme to analyze images, and determining clear path based on clear surface
DE102009048892A1 (en) Clear traveling path detecting method for vehicle e.g. car, involves generating three-dimensional map of features in view based upon preferential set of matched pairs, and determining clear traveling path based upon features
CN105740782A (en) Monocular vision based driver lane-changing process quantization method
CN109190483B (en) Lane line detection method based on vision
CN103731652A (en) Movement surface line recognition apparatus, movement surface line recognition method and movement member equipment control system
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN107389084A (en) Planning driving path planing method and storage medium
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN114399748A (en) Agricultural machinery real-time path correction method based on visual lane detection
CN105740805A (en) Lane line detection method based on multi-region joint
CN116503818A (en) Multi-lane vehicle speed detection method and system
CN112950972A (en) Parking lot map construction method, device, equipment and medium
CN108154114B (en) Lane line detection method
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211130

Address after: 116023 room 226, floor 2, No. 12, Renxian street, Qixianling, Lingshui Town, Ganjingzi District, Dalian City, Liaoning Province

Patentee after: Dalian Yun de Xingye Technology Co.,Ltd.

Address before: 116023 floor 11, No. 7, Huixian Park, high tech Industrial Park, Dalian, Liaoning Province

Patentee before: DALIAN ROILAND TECHNOLOGY Co.,Ltd.