CN107229931A - An unmanned driving system with a high degree of automation - Google Patents

An unmanned driving system with a high degree of automation

Info

Publication number
CN107229931A
Authority
CN
China
Prior art keywords
image
subsystem
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710400385.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen City Creative Industry Technology Co Ltd
Original Assignee
Shenzhen City Creative Industry Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen City Creative Industry Technology Co Ltd
Priority to CN201710400385.4A
Publication of CN107229931A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention provides an unmanned driving system with a high degree of automation, comprising a perception subsystem, a task subsystem, a decision subsystem, a control subsystem and a virtual reality subsystem. The perception subsystem is used to perceive the vehicle driving environment and includes a panoramic camera device and an image region-of-interest extraction device; the panoramic camera device acquires panoramic information around the vehicle, and the image region-of-interest extraction device obtains the regions of interest of the surrounding environment. The task subsystem assigns tasks according to the vehicle driving environment, the decision subsystem receives the assigned tasks and makes decisions, the control subsystem converts the received decisions into actual instructions for controlling the vehicle, and the virtual reality subsystem is wirelessly connected to the perception subsystem and displays vehicle driving-environment information. The beneficial effect of the invention is to provide an unmanned driving system with a high degree of automation.

Description

An unmanned driving system with a high degree of automation
Technical field
The present invention relates to the technical field of unmanned driving, and in particular to an unmanned driving system with a high degree of automation.
Background art
With the development of artificial intelligence, driverless vehicles have become the development direction of future automobiles. They have the advantages of high safety, high efficiency and convenience, help make up for the defects of human-driven automobiles, and effectively reduce traffic accidents.
When viewing an image, an observer selectively analyses only the information in the regions of interest, rather than analysing all of the global information of the image. Traditional image-analysis methods mostly process the global information of the image, which does not match this way of processing image information; such global analysis spends effort on much unwanted information and causes a great deal of unnecessary computation.
Summary of the invention
In view of the above problems, the present invention aims to provide an unmanned driving system with a high degree of automation.
The object of the present invention is achieved by the following technical solution:
An unmanned driving system with a high degree of automation is provided, comprising a perception subsystem, a task subsystem, a decision subsystem, a control subsystem and a virtual reality subsystem. The perception subsystem is used to perceive the vehicle driving environment and includes a panoramic camera device and an image region-of-interest extraction device; the panoramic camera device is used to acquire panoramic information around the vehicle, and the image region-of-interest extraction device is used to obtain the regions of interest of the surrounding environment. The task subsystem assigns tasks according to the vehicle driving environment, the decision subsystem is used to receive the assigned tasks and make decisions, the control subsystem is used to convert the received decisions into actual instructions for controlling the vehicle, and the virtual reality subsystem is wirelessly connected to the perception subsystem and is used to display vehicle driving-environment information.
The beneficial effect of the present invention is that it provides an unmanned driving system with a high degree of automation.
Brief description of the drawings
The accompanying drawing is used to further describe the invention, but the embodiment shown in the drawing does not constitute any limitation on the invention; for one of ordinary skill in the art, other drawings can be obtained from the following drawing without creative work.
Fig. 1 is a structural schematic diagram of the present invention;
Reference numerals:
perception subsystem 1, task subsystem 2, decision subsystem 3, control subsystem 4, virtual reality subsystem 5.
Detailed description of the embodiments
The invention will be further described with reference to the following embodiments.
Referring to Fig. 1, the unmanned driving system with a high degree of automation of this embodiment includes a perception subsystem 1, a task subsystem 2, a decision subsystem 3, a control subsystem 4 and a virtual reality subsystem 5. The perception subsystem 1 is used to perceive the vehicle driving environment and includes a panoramic camera device and an image region-of-interest extraction device; the panoramic camera device is used to acquire panoramic information around the vehicle, and the image region-of-interest extraction device is used to obtain the regions of interest of the surrounding environment. The task subsystem 2 assigns tasks according to the vehicle driving environment, the decision subsystem 3 is used to receive the assigned tasks and make decisions, the control subsystem 4 is used to convert the received decisions into actual instructions for controlling the vehicle, and the virtual reality subsystem 5 is wirelessly connected to the perception subsystem 1 and is used to display vehicle driving-environment information.
This embodiment provides an unmanned driving system with a high degree of automation.
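To make the data flow concrete, the cooperation of the five subsystems can be illustrated with a minimal Python sketch. All class, method and field names below are illustrative assumptions for exposition only; the patent does not specify an implementation.

```python
# Illustrative sketch of the five-subsystem architecture (all names assumed).

class PerceptionSubsystem:
    """Perceives the driving environment: panoramic capture plus ROI extraction."""
    def sense(self) -> dict:
        panorama = "panoramic frame"        # stand-in for the panoramic camera device
        roi = "salient region"              # stand-in for the ROI extraction device
        return {"panorama": panorama, "roi": roi}

class TaskSubsystem:
    """Assigns a task according to the perceived driving environment."""
    def issue_task(self, environment: dict) -> dict:
        return {"goal": "keep lane", "environment": environment}

class DecisionSubsystem:
    """Receives tasks, checks their reasonability and makes decisions."""
    def decide(self, task: dict):
        if task.get("goal"):                # reasonability check (see claim 3)
            return {"action": task["goal"]}
        return None                         # unreasonable task: hand it back

class ControlSubsystem:
    """Converts decisions into actual vehicle-control instructions."""
    def execute(self, decision: dict) -> None:
        print("control instruction:", decision["action"])

class VirtualRealitySubsystem:
    """Wirelessly receives driving-environment information and displays it."""
    def display(self, environment: dict) -> None:
        print("VR display:", environment)
```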
Preferably, the virtual reality subsystem 5 includes a communication module and a panoramic display terminal. The communication module is used by the perception subsystem 1 to transmit vehicle driving-environment information to the virtual reality subsystem 5, and the panoramic display terminal is used to display the vehicle driving-environment information.
This preferred embodiment realizes virtual-reality display of vehicle driving information.
Preferably, after the decision subsystem 3 receives a task, it judges whether the task is reasonable. If the task is reasonable, it makes a decision and transmits it to the control subsystem 4; if the task is unreasonable, it returns the task to the task subsystem 2.
This preferred embodiment improves the decision-making capability of the decision subsystem.
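Continuing the sketch above, the task-validation loop of this preferred embodiment might be wired as follows; the retry behaviour for an unreasonable task is an assumption, since the text only states that the task is returned to the task subsystem.

```python
perception = PerceptionSubsystem()
tasks = TaskSubsystem()
decider = DecisionSubsystem()
control = ControlSubsystem()
vr = VirtualRealitySubsystem()

environment = perception.sense()
vr.display(environment)                  # wireless VR display of the environment

task = tasks.issue_task(environment)
decision = decider.decide(task)
if decision is not None:
    control.execute(decision)            # reasonable task: decision goes to control
else:
    task = tasks.issue_task(environment) # unreasonable task: returned and re-issued
```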
Preferably, the image region-of-interest extraction device includes an eye-movement generation module, a feature generation module and an evaluation module. The eye-movement generation module is used to obtain the first region of interest of an image, the feature generation module is used to obtain the second region of interest of the image, and the evaluation module is used to evaluate the second region of interest against the first region of interest.
The first region of interest of the image is obtained by testing subjects with an eye tracker;
The feature generation module includes a feature extraction unit, a saliency map generation unit and a region-of-interest generation unit. The feature extraction unit is used to extract the color features and texture features of the image, the saliency map generation unit is used to generate the feature saliency map of the image from the image features, and the region-of-interest generation unit is used to generate the second region of interest of the image from the feature saliency map.
The color feature is extracted in the following way:
A. Convert the image into HSV mode; the color feature is extracted by the following formula:

$$f(x,y)=\frac{4}{\left(1+2e^{-\frac{bhd(x,y)}{\sqrt{bhd^{2}+1}}}\right)\left(1+2e^{-\frac{ld(x,y)}{\sqrt{ld^{2}+1}}}\right)}$$

where f(x, y) is the color feature of the image, bhd(x, y) is the saturation of the image at pixel (x, y), bhd is the mean saturation of the image, ld(x, y) is the brightness of the image at pixel (x, y), and ld is the mean brightness of the image;
B. Normalize the range of pixel values to [0, 255] to obtain the color feature map of the image;
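A possible NumPy/OpenCV rendering of steps A and B is sketched below. The formula follows the text above; the use of OpenCV's HSV conversion and min-max normalization is an implementation assumption.

```python
import cv2
import numpy as np

def color_feature_map(bgr: np.ndarray) -> np.ndarray:
    """Color feature map: convert to HSV, combine per-pixel saturation and
    brightness with their image-wide means (formula above), then normalize
    the result to [0, 255]."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    sat, val = hsv[..., 1], hsv[..., 2]          # bhd(x, y) and ld(x, y)
    sat_mean, val_mean = sat.mean(), val.mean()  # bhd and ld (image averages)

    f = 4.0 / ((1 + 2 * np.exp(-sat / np.sqrt(sat_mean**2 + 1))) *
               (1 + 2 * np.exp(-val / np.sqrt(val_mean**2 + 1))))
    return cv2.normalize(f, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```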
The texture features are extracted in the following way:
A. Extract the texture features of the image at 5 scales and 8 orientations using a Gabor filter bank, obtaining 40 texture maps of the image;
B. Normalize the 40 texture maps of the image, then superimpose them with equal weights to obtain the final texture feature map.
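The texture step might look as follows. The text fixes 5 scales and 8 orientations but leaves the Gabor parameters open, so the kernel sizes, sigma and wavelength below are assumptions.

```python
import cv2
import numpy as np

def texture_feature_map(bgr: np.ndarray) -> np.ndarray:
    """Texture feature map: a Gabor bank at 5 scales x 8 orientations gives
    40 texture maps, which are normalized and superimposed with equal
    weights (steps A and B above)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    acc = np.zeros_like(gray)
    for ksize in (7, 11, 15, 19, 23):            # 5 scales (sizes assumed)
        for k in range(8):                       # 8 orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * ksize,
                                      theta=k * np.pi / 8,
                                      lambd=ksize / 2.0, gamma=0.5)
            resp = np.abs(cv2.filter2D(gray, cv2.CV_64F, kern))
            acc += cv2.normalize(resp, None, 0.0, 1.0, cv2.NORM_MINMAX)
    acc /= 40.0                                  # equal-weight superposition
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```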
In this preferred embodiment, the image region-of-interest extraction device uses a feature extraction unit to extract image features. When extracting the color features, the RGB values, which reflect only color, are converted into hue, saturation and brightness information reflecting multiple attributes, giving a more accurate color feature map; when extracting the texture features, 40 texture maps are processed, giving more detailed texture information.
Preferably, the feature saliency map of the image is generated in the following way:
A. From the color feature map and the texture feature map, obtain the corresponding color saliency map and texture saliency map using the ITTI visual attention model;
B. Determine the feature saliency map of the image using the following formula:

$$X=0.7\left(\frac{Y^{3}}{Y^{2}+W^{2}}+\frac{W^{3}}{Y^{2}+W^{2}}\right)+0.3\left(\frac{Y^{2}}{Y+W}+\frac{W^{2}}{Y+W}\right)$$

where X is the feature saliency map of the image, Y is the color saliency map of the image, and W is the texture saliency map of the image;
The second region of interest of the image is generated in the following way: normalize the range of pixel values to [0, 255], set a threshold T, and extract the pixels whose value is greater than T to obtain the second region of interest DE of the image.
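Given color and texture saliency maps Y and W (for example from an ITTI-model implementation, which is not reproduced here), the fusion formula and the thresholding step could be sketched as follows; the epsilon guard and the default threshold are assumptions.

```python
import cv2
import numpy as np

def fused_saliency(Y: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Fuse color saliency Y and texture saliency W with the formula above,
    then normalize the feature saliency map X to [0, 255]."""
    Y = Y.astype(np.float64)
    W = W.astype(np.float64)
    eps = 1e-9                                   # avoids 0/0 where Y = W = 0
    X = (0.7 * (Y**3 + W**3) / (Y**2 + W**2 + eps)
         + 0.3 * (Y**2 + W**2) / (Y + W + eps))
    return cv2.normalize(X, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def second_roi(X: np.ndarray, T: int = 128) -> np.ndarray:
    """Binary mask DE of pixels whose fused saliency exceeds threshold T
    (T is left free in the text; 128 is an arbitrary default)."""
    return np.where(X > T, 255, 0).astype(np.uint8)
```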
This preferred embodiment extracts the region of interest from low-level image features: the color features reflect the global characteristics of the image and the texture features reflect its local characteristics, so a more accurate feature saliency map is obtained and, in turn, a more accurate second region of interest.
Preferably, the evaluation module includes a first evaluation unit, a second evaluation unit and a comprehensive evaluation unit. The first evaluation unit performs a first evaluation of the second region of interest to obtain a first evaluation value; the second evaluation unit performs a second evaluation of the second region of interest to obtain a second evaluation value; the comprehensive evaluation unit performs a comprehensive evaluation of the second region of interest according to the first and second evaluation values to obtain a comprehensive evaluation value;
The first evaluation of the second region of interest uses the first evaluation value P1, calculated by the following formula:

$$P_{1}(DE,DY)=e^{-\frac{1}{M}\sum_{i=0}^{M-1}\frac{|de_{i}-dy_{i}|}{255}}$$

where DY is the first region of interest, de_i and dy_i are the pixel values of the i-th pixel of the second and first regions of interest respectively, and M is the number of pixels in the image;
The second evaluation of the second region of interest uses the second evaluation value P2, calculated by the following formula:

$$P_{2}(DE,DY)=-\lg\frac{\sqrt{\left(\sum_{i=0}^{M-1}\frac{de_{i}}{M}-\sum_{i=0}^{M-1}\frac{dy_{i}}{M}\right)^{2}+1}}{\sqrt{w^{2}+h^{2}}}$$

where w and h are the width and height of the image respectively;
The comprehensive evaluation of the second region of interest uses the comprehensive evaluation value Pc, calculated by the following formula:

$$P_{c}(DE,DY)=\frac{P_{1}\times P_{2}+\sqrt{(P_{1}+P_{2})^{2}+1}}{e^{\frac{1}{P_{1}+P_{2}}}}$$

The larger the comprehensive evaluation value, the more accurate the extraction of the second region of interest.
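A direct transcription of the three evaluation formulas, assuming DE and DY are same-sized grayscale maps with pixel values in [0, 255]:

```python
import numpy as np

def evaluate_roi(DE: np.ndarray, DY: np.ndarray) -> float:
    """Comprehensive evaluation value Pc of a second ROI DE against the
    eye-tracking ROI DY, per the formulas for P1, P2 and Pc above."""
    h, w = DY.shape
    de = DE.astype(np.float64).ravel()
    dy = DY.astype(np.float64).ravel()
    M = de.size

    p1 = np.exp(-np.sum(np.abs(de - dy) / 255.0) / M)          # first evaluation
    p2 = -np.log10(np.sqrt((de.mean() - dy.mean()) ** 2 + 1)   # second evaluation
                   / np.sqrt(w**2 + h**2))
    pc = (p1 * p2 + np.sqrt((p1 + p2) ** 2 + 1)) / np.exp(1.0 / (p1 + p2))
    return pc                                                  # larger = more accurate
```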
In this preferred embodiment, the image region-of-interest extraction device takes the first region of interest as the evaluation standard and reflects the accuracy and validity of the second region of interest through its comprehensive evaluation value. This ensures the accuracy of the second region of interest and improves the environment-perception performance of the unmanned driving system, thereby improving its safety.
When the unmanned driving system with a high degree of automation of the present invention is used for automatic driving, driving safety and driving efficiency were measured at different driving distances. Compared with other unmanned driving systems, the beneficial effects produced are shown in the table:
Driving distance/km    Driving safety improvement    Driving efficiency improvement
100 10% 18%
110 12% 23%
120 13% 25%
130 15% 28%
140 17% 32%
Finally, it should be noted that the above embodiments are merely illustrative of the technical solution of the present invention and do not limit its scope of protection. Although the present invention has been explained with reference to preferred embodiments, one of ordinary skill in the art should understand that the technical solution of the invention can be modified or equivalently substituted without departing from the essence and scope of the technical solution of the invention.

Claims (8)

1. An unmanned driving system with a high degree of automation, characterised by comprising a perception subsystem, a task subsystem, a decision subsystem, a control subsystem and a virtual reality subsystem, wherein the perception subsystem is used to perceive the vehicle driving environment and includes a panoramic camera device and an image region-of-interest extraction device; the panoramic camera device is used to acquire panoramic information around the vehicle, and the image region-of-interest extraction device is used to obtain the regions of interest of the surrounding environment; the task subsystem assigns tasks according to the vehicle driving environment; the decision subsystem is used to receive the assigned tasks and make decisions; the control subsystem is used to convert the received decisions into actual instructions for controlling the vehicle; and the virtual reality subsystem is wirelessly connected to the perception subsystem and is used to display vehicle driving-environment information.
2. The unmanned driving system with a high degree of automation according to claim 1, characterised in that the virtual reality subsystem includes a communication module and a panoramic display terminal; the communication module is used by the perception subsystem to transmit vehicle driving-environment information to the virtual reality subsystem, and the panoramic display terminal is used to display the vehicle driving-environment information.
3. The unmanned driving system with a high degree of automation according to claim 2, characterised in that after the decision subsystem receives a task it judges whether the task is reasonable; if the task is reasonable, it makes a decision and transmits it to the control subsystem, and if the task is unreasonable, it returns the task to the task subsystem.
4. The unmanned driving system with a high degree of automation according to claim 3, characterised in that the image region-of-interest extraction device includes an eye-movement generation module, a feature generation module and an evaluation module; the eye-movement generation module is used to obtain the first region of interest of an image, the feature generation module is used to obtain the second region of interest of the image, and the evaluation module is used to evaluate the second region of interest against the first region of interest.
5. The unmanned driving system with a high degree of automation according to claim 4, characterised in that the first region of interest of the image is obtained by testing subjects with an eye tracker;
the feature generation module includes a feature extraction unit, a saliency map generation unit and a region-of-interest generation unit; the feature extraction unit is used to extract the color features and texture features of the image, the saliency map generation unit is used to generate the feature saliency map of the image from the image features, and the region-of-interest generation unit is used to generate the second region of interest of the image from the feature saliency map.
6. The unmanned driving system with a high degree of automation according to claim 5, characterised in that the color feature is extracted in the following way:
A. Convert the image into HSV mode; the color feature is extracted by the following formula:
$$f(x,y)=\frac{4}{\left(1+2e^{-\frac{bhd(x,y)}{\sqrt{bhd^{2}+1}}}\right)\left(1+2e^{-\frac{ld(x,y)}{\sqrt{ld^{2}+1}}}\right)}$$
where f(x, y) is the color feature of the image, bhd(x, y) is the saturation of the image at pixel (x, y), bhd is the mean saturation of the image, ld(x, y) is the brightness of the image at pixel (x, y), and ld is the mean brightness of the image;
B. Normalize the range of pixel values to [0, 255] to obtain the color feature map of the image;
the texture features are extracted in the following way:
A. Extract the texture features of the image at 5 scales and 8 orientations using a Gabor filter bank, obtaining 40 texture maps of the image;
B. Normalize the 40 texture maps of the image, then superimpose them with equal weights to obtain the final texture feature map.
7. The unmanned driving system with a high degree of automation according to claim 6, characterised in that the feature saliency map of the image is generated in the following way:
A. From the color feature map and the texture feature map, obtain the corresponding color saliency map and texture saliency map using the ITTI visual attention model;
B. Determine the feature saliency map of the image using the following formula:
$$X=0.7\left(\frac{Y^{3}}{Y^{2}+W^{2}}+\frac{W^{3}}{Y^{2}+W^{2}}\right)+0.3\left(\frac{Y^{2}}{Y+W}+\frac{W^{2}}{Y+W}\right)$$
where X is the feature saliency map of the image, Y is the color saliency map of the image, and W is the texture saliency map of the image;
the second region of interest of the image is generated in the following way: normalize the range of pixel values to [0, 255], set a threshold T, and extract the pixels whose value is greater than T to obtain the second region of interest DE of the image.
8. The unmanned driving system with a high degree of automation according to claim 7, characterised in that the evaluation module includes a first evaluation unit, a second evaluation unit and a comprehensive evaluation unit; the first evaluation unit performs a first evaluation of the second region of interest to obtain a first evaluation value, the second evaluation unit performs a second evaluation of the second region of interest to obtain a second evaluation value, and the comprehensive evaluation unit performs a comprehensive evaluation of the second region of interest according to the first and second evaluation values to obtain a comprehensive evaluation value;
the first evaluation of the second region of interest uses the first evaluation value P1, calculated by the following formula:
$$P_{1}(DE,DY)=e^{-\frac{1}{M}\sum_{i=0}^{M-1}\frac{|de_{i}-dy_{i}|}{255}}$$
where DY is the first region of interest, de_i and dy_i are the pixel values of the i-th pixel of the second and first regions of interest respectively, and M is the number of pixels in the image;
the second evaluation of the second region of interest uses the second evaluation value P2, calculated by the following formula:
$$P_{2}(DE,DY)=-\lg\frac{\sqrt{\left(\sum_{i=0}^{M-1}\frac{de_{i}}{M}-\sum_{i=0}^{M-1}\frac{dy_{i}}{M}\right)^{2}+1}}{\sqrt{w^{2}+h^{2}}}$$
where w and h are the width and height of the image respectively;
the comprehensive evaluation of the second region of interest uses the comprehensive evaluation value Pc, calculated by the following formula:
$$P_{c}(DE,DY)=\frac{P_{1}\times P_{2}+\sqrt{(P_{1}+P_{2})^{2}+1}}{e^{\frac{1}{P_{1}+P_{2}}}}$$
The larger the comprehensive evaluation value, the more accurate the extraction of the second region of interest.
CN201710400385.4A 2017-05-31 2017-05-31 An unmanned driving system with a high degree of automation Pending CN107229931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710400385.4A CN107229931A (en) 2017-05-31 2017-05-31 An unmanned driving system with a high degree of automation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710400385.4A CN107229931A (en) 2017-05-31 2017-05-31 An unmanned driving system with a high degree of automation

Publications (1)

Publication Number Publication Date
CN107229931A true CN107229931A (en) 2017-10-03

Family

ID=59933606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710400385.4A Pending CN107229931A (en) An unmanned driving system with a high degree of automation

Country Status (1)

Country Link
CN (1) CN107229931A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN105480224A (en) * 2014-09-19 2016-04-13 任丘市永基建筑安装工程有限公司 Unmanned vehicle system
CN206021074U (en) * 2016-06-14 2017-03-15 北京汽车研究总院有限公司 An unmanned driving system
CN106394545A (en) * 2016-10-09 2017-02-15 北京汽车集团有限公司 Driving system, unmanned vehicle and vehicle remote control terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895147A (en) * 2017-11-07 2018-04-10 龚土婷 A safe driverless automobile system
CN108036787A (en) * 2017-12-07 2018-05-15 梁金凤 An unmanned survey vehicle with accurate measurement
CN108098769A (en) * 2017-12-07 2018-06-01 梁金凤 A robot for surveying hazardous areas
CN108986481A (en) * 2018-07-17 2018-12-11 太仓远见科技咨询服务有限公司 A highly automated transport vehicle

Similar Documents

Publication Publication Date Title
CN110501018B (en) Traffic sign information acquisition method for high-precision map production
CN112912920B (en) Point cloud data conversion method and system for 2D convolutional neural network
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN107229931A An unmanned driving system with a high degree of automation
DE102020131323A1 (en) CAMERA-TO-LIDAR CALIBRATION AND VALIDATION
CN110738121A (en) front vehicle detection method and detection system
CN111209780A (en) Lane line attribute detection method and device, electronic device and readable storage medium
CN107084727A (en) A kind of vision positioning system and method based on high-precision three-dimensional map
CN109344804A (en) A kind of recognition methods of laser point cloud data, device, equipment and medium
US11455806B2 (en) System and method for free space estimation
CN105654732A (en) Road monitoring system and method based on depth image
DE102020200843A1 (en) Localization with neural network based on image registration of sensor data and map data
CN106803073B (en) Auxiliary driving system and method based on stereoscopic vision target
CN114419605B (en) Visual enhancement method and system based on multi-network vehicle-connected space alignment feature fusion
CN113029187A (en) Lane-level navigation method and system fusing ADAS fine perception data
CN110472508B (en) Lane line distance measurement method based on deep learning and binocular vision
CN117111085A (en) Automatic driving automobile road cloud fusion sensing method
CN116258756B (en) Self-supervision monocular depth estimation method and system
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN112115737B (en) Vehicle orientation determining method and device and vehicle-mounted terminal
CN115618602A (en) Lane-level scene simulation method and system
CN114708565A (en) Intelligent driving scene recognition model creating method, device, equipment and storage medium
CN113221756A (en) Traffic sign detection method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171003