CN109374008A - Image acquisition system and method based on a trinocular camera - Google Patents

Image acquisition system and method based on a trinocular camera Download PDF

Info

Publication number
CN109374008A
CN109374008A (application CN201811392498.5A)
Authority
CN
China
Prior art keywords
unit
information
image
road markings
binocular camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811392498.5A
Other languages
Chinese (zh)
Inventor
李志伟
薛周鹏
蔡锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deepmotion Technology Beijing Co Ltd
Original Assignee
Deepmotion Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deepmotion Technology Beijing Co Ltd
Priority to CN201811392498.5A
Publication of CN109374008A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G01C21/32 - Structuring or formatting of map data
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

An image acquisition method and system based on a trinocular camera. The method includes: S10, obtaining road marking recognition information from images detected by a monocular camera unit (1); S20, obtaining position information of the road markings from images detected by a binocular camera unit (2); S30, obtaining curvature, gradient and inclination information of the road from the images detected by the monocular camera unit (1) and the measurements of an inertial measurement unit; S40, generating a high-precision map based on the road marking recognition information, the position information of the road markings, and the curvature, gradient and inclination information of the road. By fusing vision, inertial and GPS data from the trinocular camera, the inertial measurement unit and a GPS unit, the present invention achieves high-precision positioning with lower-cost sensors, and thereby lower-cost acquisition of high-precision map data.

Description

Image acquisition system and method based on a trinocular camera
Technical field
The present invention relates to the field of image acquisition, and more particularly to an image acquisition system and method for high-precision maps for autonomous driving.
Background technique
Maps have become an indispensable part of daily vehicle travel, mainly used for viewing the environment and for route navigation. However, most common maps today only provide geographic information at road-level precision: neither the driver nor a control system can learn from the map which lanes the current road has or which lane the vehicle occupies. The road information such maps contain is limited, generally only rough position information for some road signs together with road shape information; precision is low, the amount of information is small, and road features are not reflected comprehensively. The imagery of common maps comes from satellite or aerial images, whose resolution is low, at best meter-level, so road surface features cannot be resolved and lane information and pavement markings cannot be presented accurately. Low resolution is also a key factor limiting improvements in map accuracy.
With the development and application of advanced driver assistance systems and autonomous vehicles, computer intelligence is increasingly being introduced into driving. Unlike a human driver, a computer requires precise information to carry out the various operations of the vehicle. Common maps cannot provide data detailed and accurate enough for a computer to use; only a fine-grained map can meet this need, and using a high-precision map can effectively improve the performance of advanced driver assistance systems and autonomous vehicles.
Several methods for fine-grained map making already exist, for example using lidar. Lidar acquires information with high precision and good global consistency, but it is costly and the data volume is large; moreover, the generated image is a reflectance image, which differs from the real-world scene. Acquisition by photographic images, in contrast, is low-cost and comparatively simple to use.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to provide an image information acquisition system and method based on a trinocular camera unit. Semantic extraction is performed via a monocular camera unit to obtain road marking information such as lane lines, road surface markings, traffic lights, lamp posts and direction signs; ranging is performed via a binocular camera to obtain the distance between the camera and the road markings; and data such as GPS position and camera attitude are acquired at the same time. A processor unit performs sensor data fusion and, according to the capability of the main control chip, chooses between processing locally or passing the data to a remote processor that further generates the map. Through the fusion of vision, inertia and GPS carried out by the trinocular camera, the inertial measurement unit and the GPS unit, the present invention achieves high-precision positioning with lower-cost sensors, and thereby lower-cost acquisition of high-precision map data.
The purpose of the present invention is to provide an image acquisition system based on a trinocular camera. The specific technical solutions are as follows:
In a first aspect, an embodiment of the invention provides an image acquisition system based on a trinocular camera, comprising:
In a second aspect, an embodiment of the invention provides an image acquisition method based on a trinocular camera, comprising:
In embodiments of the present invention, a trinocular camera, an inertial measurement unit and a GPS unit are used; through the fusion of vision, inertia and GPS, high-precision positioning can be achieved with lower-cost sensors, thereby realizing lower-cost acquisition of high-precision map data.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural block diagram of the image acquisition system of the present invention;
Fig. 2 is a schematic diagram of a typical environment during image acquisition according to the present invention;
Fig. 3 is a flowchart of the image acquisition method of the present invention;
Fig. 4 is a flowchart of obtaining the position information of road markings according to the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 shows a schematic diagram of the image acquisition system based on a trinocular camera according to the present invention. The system includes at least one acquisition device 110 and a map creation device 120. The acquisition device 110 includes a monocular camera unit 1, a binocular camera unit 2, an inertial measurement unit 3, a GPS unit 4, an acquisition processor unit 5 and a data transmission unit 6. The map creation device 120 may include a data transmission unit 7, a map processor unit 8 and a memory 9; the map is saved in the memory 9.
In the present embodiment the acquisition device 110 can be fixedly mounted on a motor vehicle, although other arrangements are of course possible, for example integrating it as a handheld device. The map creation device 120 may be designed as a remote server or an in-vehicle server. Communication is used between data transmission unit 6 and data transmission unit 7: when the map creation device 120 is a remote server, the communication may be wireless, such as GPRS, 3G/4G, WiFi, Bluetooth or radio frequency; when the map creation device 120 is an in-vehicle server, the acquisition device 110 and the map creation device 120 may be connected by wire, such as a bus or Ethernet.
The monocular camera unit 1 is connected to the acquisition processor unit 5 and uses a monocular camera to acquire image data along the road. This image data serves as input to the acquisition processor unit 5 for semantic extraction of lane lines, road surface markings and other road markings, so as to obtain road marking recognition information such as lane lines, road surface markings, traffic lights and traffic signs.
The binocular camera unit 2 is connected to the acquisition processor unit 5 and uses a binocular camera to acquire image data along the road. This image data serves as input to the acquisition processor unit 5 for feature point extraction and matching on lane lines, road surface markings and other road markings, yielding their spatial positions relative to the binocular camera. This gives the distances between the binocular camera and road markings such as lane lines, traffic lights and lamp posts, and hence the depth information and geometric structure features of objects in the field of view.
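The binocular ranging described above can be illustrated with a minimal triangulation sketch. This is not the patent's implementation: it assumes an ideal rectified stereo pair with focal length f (in pixels), principal point (cx, cy) and baseline B (in meters), for which depth follows Z = f * B / d for a matched disparity d.

```python
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a matched feature point from its disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return f_px * baseline_m / disparity_px

def stereo_point(f_px: float, cx: float, cy: float, baseline_m: float,
                 u: float, v: float, disparity_px: float):
    """Back-project a left-image pixel (u, v) with disparity d into
    camera-frame coordinates (X, Y, Z) using the pinhole model."""
    z = stereo_depth(f_px, baseline_m, disparity_px)
    return ((u - cx) * z / f_px, (v - cy) * z / f_px, z)

# Illustrative numbers: f = 700 px, B = 0.12 m, disparity 7 px -> 12 m away
print(stereo_depth(700.0, 0.12, 7.0))  # 12.0
```

A real binocular unit would first rectify both images and match features before this step; the sketch covers only the geometry.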
The inertial measurement unit 3 is connected to the acquisition processor unit 5 and monitors the attitude information of the acquisition system in real time to assist the processing of the acquisition processor unit 5. It also feeds back information such as the curvature, gradient and inclination of the road, and can assist with dead reckoning during brief losses of the GPS signal.
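Dead reckoning during a brief GPS outage can be sketched as simple planar propagation from IMU-derived speed and yaw rate. This is an illustrative toy, not the patent's method, and the variable names are assumptions.

```python
import math

def dead_reckon_step(x, y, heading, speed_mps, yaw_rate_rps, dt):
    """One dead-reckoning step: rotate the heading by the gyro yaw rate,
    then advance the planar position along the new heading."""
    heading += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    return x, y, heading

# Drive straight along x for 1 s at 10 m/s with no turning
print(dead_reckon_step(0.0, 0.0, 0.0, 10.0, 0.0, 1.0))  # (10.0, 0.0, 0.0)
```

Errors in such propagation grow without bound, which is why the patent only relies on it while the GPS signal is briefly lost.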
The GPS unit 4 is connected to the acquisition processor unit 5 and receives GPS satellite signals to determine the position coordinates of the motor vehicle and hence of the binocular camera; these are fused with the data of the binocular camera unit to obtain the position coordinate information of lane lines, road surface markings and other road markings.
The acquisition processor unit 5 performs synchronization control, image processing and preliminary information fusion for the monocular camera unit 1 and the binocular camera unit 2, and controls the transfer of the acquired information to the map creation device 120.
The data transmission unit 6 transfers the data output by the acquisition processor unit 5 to the map creation device 120.
The data transmission unit 7 receives the data sent from the acquisition device 110 and transfers it to the map processor unit 8 for data processing, which then generates the high-precision map.
The map processor unit 8 performs fusion processing on the data from the acquisition device 110 to generate the high-precision map, and saves the generated high-precision map in the memory 9.
The memory 9 stores the generated high-precision map.
The acquisition processor unit 5 includes at least a main control chip circuit together with peripheral power supply, storage and external communication interface (network port, USB, etc.) circuits. The main control chip can be a processor of different architectures such as ARM, FPGA or TX2.
The interfaces between the binocular camera unit 2, the monocular camera unit 1 and the acquisition processor unit 5 include but are not limited to MIPI and LVDS interfaces.
Since a rolling-shutter camera is prone to the jello effect when objects move at high speed, which in the present system would easily cause a mismatch between the binocular and monocular camera images, the image sensor chips inside the binocular camera unit and the monocular camera unit in this application are CMOS with a global shutter.
The binocular camera unit is not limited to black-and-white or color cameras, nor to natural-light, infrared or other forms of camera; likewise the monocular camera is not limited to color or black-and-white.
A typical application scenario of the image acquisition system and image acquisition method of the invention is the acquisition of high-precision maps for autonomous vehicles. Fig. 2 is a schematic diagram of a typical environment when the acquisition device is mounted on a motor vehicle for image acquisition during high-precision map collection, used here to explain the image acquisition system and method. The monocular camera unit 1 and the binocular camera unit 2 are, for example, directed along the driving direction of the motor vehicle. Both units detect the surrounding images of the motor vehicle in real time, and the detected images are transferred by them to the acquisition processor unit 5.
The images transferred to the acquisition processor unit 5 by the monocular camera unit 1 and the binocular camera unit 2 have a field of view 21 of predetermined size. Within the field of view 21 are the lane 22 in which the motor vehicle is located, the road surface marking 23 in the lane 22, and the surrounding environment outside the lane 22. The lane 22 has a left lane line 221 and a right lane line 222. In the surrounding environment outside the lane 22 there are other road markings 24 such as road signs, traffic lights, light poles and buildings.
The acquisition processor unit 5 performs image recognition on the images acquired by the monocular camera unit 1 using deep learning or other well-known image recognition methods, and can identify and determine the left lane line 221 and right lane line 222 in the field of view 21 of the motor vehicle, thereby obtaining lane line recognition information. Specific image recognition methods are known in the art and are not limited here.
Based on the images transferred to the acquisition processor unit 5 by the binocular camera unit 2, the left lane line 221 and right lane line 222 can be positioned by means of binocular positioning, i.e. the acquisition processor unit 5 can determine the spatial positions of the left lane line 221 and the right lane line 222 relative to the binocular camera unit 2. Technical means for binocular positioning are known in the art and are not limited here.
The relative positions of the monocular camera unit 1, the binocular camera unit 2, the inertial measurement unit 3 and the motor vehicle are fixed, so the coordinate system of each unit can be unified by transformation into the coordinate system of the motor vehicle, i.e. into a longitude, latitude and altitude coordinate system. When the GPS signal is good, the current coordinate information of the motor vehicle can be determined by the GPS unit 4, and from it the current coordinate information of the binocular camera unit 2. From the spatial positions of the left lane line 221 and right lane line 222 relative to the binocular camera unit 2, combined with the current latitude and longitude coordinates of the binocular camera unit 2, the current coordinate positions of the left lane line 221 and right lane line 222 can be determined, i.e.
X=X1+X2+a,
Y=Y1+Y2+b,
Z=Z1+Z2+c.
where the coordinates (X, Y, Z) are the coordinate position information of the lane lines 221, 222; the coordinates (X1, Y1, Z1) are the spatial position information of the lane lines 221, 222 relative to the binocular camera unit 2; the coordinates (X2, Y2, Z2) are the current coordinate information of the motor vehicle; and (a, b, c) are fixed numbers representing the fixed mounting position of the binocular camera unit 2 on the motor vehicle.
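Under the patent's simplifying assumption that the camera, vehicle and world axes are aligned (no rotations), the component-wise sums above can be sketched directly. The function name is an assumption for illustration.

```python
def marking_world_position(rel, vehicle, mount):
    """Coordinate position of a road marking per X = X1 + X2 + a (and Y, Z
    alike): position relative to the binocular camera (X1, Y1, Z1), plus
    vehicle position (X2, Y2, Z2), plus the fixed mounting offset (a, b, c).
    A real system would also rotate `rel` into the world frame first."""
    return tuple(r + v + m for r, v, m in zip(rel, vehicle, mount))

print(marking_world_position((1.0, 2.0, 3.0),
                             (100.0, 200.0, 30.0),
                             (0.5, 0.0, 1.0)))  # (101.5, 202.0, 34.0)
```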
When the GPS signal is weak or lost and accurate positioning of the motor vehicle cannot be completed, the current coordinate information of the motor vehicle can be extrapolated from the camera units and the inertial measurement unit 3 using SLAM or VIO techniques. Specific extrapolation methods are well known and are not limited here.
The acquisition processor unit 5 detects feature points in the images transmitted by the monocular camera unit 1 and matches the feature points of the current frame against those of the previous frame; from the matched feature points, the motion of the camera can be estimated, i.e. the attitude and position change of the current frame relative to the previous frame. The inertial measurement unit 3, typically a multi-axis accelerometer and/or gyroscope, monitors the attitude information of the image acquisition system in real time. The attitude and position change obtained from the images is filtered and fused with the attitude information obtained from the inertial measurement unit 3 and corrected with the position coordinates provided by the GPS unit, giving an accurate motion track of the motor vehicle and hence information such as the curvature, gradient and inclination of the current road.
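The filtering and fusion of the vision-derived attitude with the IMU attitude could, in the simplest case, be a complementary-filter blend. A real system of this kind would typically use an EKF, so the following is only a sketch under assumed names, shown here for a single heading angle.

```python
import math

def fuse_heading(vision_heading, imu_heading, imu_weight=0.98):
    """Blend two heading estimates on the unit circle (this handles the
    -pi/pi wrap-around that a plain weighted average would not)."""
    x = imu_weight * math.cos(imu_heading) + (1 - imu_weight) * math.cos(vision_heading)
    y = imu_weight * math.sin(imu_heading) + (1 - imu_weight) * math.sin(vision_heading)
    return math.atan2(y, x)

# Equal weights, headings 0 and pi/2 -> pi/4
print(round(fuse_heading(0.0, math.pi / 2, imu_weight=0.5), 6))  # 0.785398
```

The high default IMU weight reflects that the gyroscope is smooth at short timescales while the vision estimate corrects its slow drift.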
The first camera unit 1 and the second camera unit 2 may acquire images with the same period; of course, different image acquisition periods may also be used and are not limited here, but the images acquired by the first camera unit 1 and the second camera unit 2 should be aligned, i.e. the depth information and structural features of objects obtained by the binocular camera unit 2 should match the results extracted by the monocular camera unit 1. Likewise, the period with which the inertial measurement unit 3 and the GPS unit 4 acquire data may be an integer multiple of the image acquisition period, but their data should be aligned with the data acquired by the first camera unit 1 and the second camera unit 2. This alignment can be obtained through a synchronization mechanism, which may be a hardware one: for example, the acquisition processor unit 5 can obtain accurate time information from the GPS unit 4 and periodically send a signal to trigger image acquisition (rising-edge or falling-edge triggering). The synchronization periods of the units may differ; for example, the inertial measurement unit 3 typically updates its data at a much higher frequency than the GPS unit 4 acquires data, but the data obtained by each unit carries an accurate timestamp according to its set period.
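With every sample timestamped, aligning IMU/GPS data with image frames can be as simple as a nearest-timestamp lookup. The code below is an illustrative sketch, not the patent's synchronization mechanism; the sample rates are assumptions.

```python
import bisect

def nearest_index(sorted_ts, t):
    """Index of the timestamp in sorted_ts closest to t."""
    i = bisect.bisect_left(sorted_ts, t)
    if i == 0:
        return 0
    if i == len(sorted_ts):
        return len(sorted_ts) - 1
    return i if sorted_ts[i] - t < t - sorted_ts[i - 1] else i - 1

def align_frames(frame_ts, sensor_ts):
    """Nearest sensor sample for each camera frame (the IMU typically
    samples at an integer multiple of the frame rate)."""
    return [nearest_index(sensor_ts, t) for t in frame_ts]

imu_ts = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]   # 50 Hz IMU
frame_ts = [0.00, 0.10]                          # 10 Hz camera
print(align_frames(frame_ts, imu_ts))  # [0, 5]
```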
Likewise, the acquisition processor unit 5 performs image recognition on the images acquired by the monocular camera unit 1 in the field of view 21 using deep learning or other well-known image recognition methods, and can identify and determine the road surface marking 23 in the lane 22 of the motor vehicle, thereby obtaining road surface marking recognition information. In Fig. 2 the road surface marking 23 is shown as an arrow; in practice, however, it is not limited to an arrow and may for example also be marking information such as a speed limit or bus zone (not shown). Specific image recognition methods are known in the art and are not limited here.
Based on the images transferred to the acquisition processor unit 5 by the binocular camera unit 2, the relative position of the road surface marking 23 can be determined by binocular positioning, i.e. the acquisition processor unit 5 can determine the spatial position of the road surface marking 23 relative to the binocular camera unit 2. Technical means for binocular positioning are known in the art and are not limited here.
The current coordinate information of the motor vehicle can be determined by the GPS unit 4, and from it the current coordinate information of the binocular camera unit 2. From the spatial position of the road surface marking 23 relative to the binocular camera unit 2, combined with the current latitude and longitude coordinates of the binocular camera unit 2, the current coordinate position of the road surface marking 23 can be determined.
The lane line recognition information for a series of left lane lines 221 and right lane lines 222, together with their coordinate positions, forms lane line information; the road surface marking recognition information for a series of road surface markings 23, together with their coordinate positions, forms road surface marking information. Through the data transmission unit 6 of the acquisition device 110, the lane line information and the road surface marking information can be transferred to the map creation device 120 and used there to generate the high-precision map.
In addition, in the surrounding environment outside the lane 22 within the field of view 21 there are also other road markings usable for map positioning, such as traffic lights, traffic signboards, light poles and buildings. These road markings can likewise be identified and determined by the acquisition processor unit 5.
The acquisition processor unit 5 performs image recognition on the images acquired by the monocular camera unit 1 using deep learning or other well-known image recognition methods, and can identify and determine the other road markings outside the lane 22 of the motor vehicle, thereby obtaining recognition information for those road markings. Specific image recognition methods are known in the art and are not limited here.
Based on the images transferred to the acquisition processor unit 5 by the binocular camera unit 2, the relative positions of these road markings can be determined by binocular positioning, i.e. the acquisition processor unit 5 can determine their spatial positions relative to the binocular camera unit 2. Technical means for binocular positioning are known in the art and are not limited here.
The current coordinate information of the motor vehicle can be determined by the GPS unit 4, and from it the current coordinate information of the binocular camera unit 2. From the spatial position of a road marking relative to the binocular camera unit 2, combined with the current latitude and longitude coordinates of the binocular camera unit 2, the current coordinate position of that road marking can be determined.
The road marking recognition information for a series of road markings, together with their coordinate positions, forms road marking information. Through the data transmission unit 6 of the acquisition device 110, this road marking information can likewise be transferred to the map creation device 120 and used there to generate the high-precision map.
As described above, the acquisition processor unit 5 estimates the camera motion from feature points matched between the current and previous frames, fuses it with the attitude information monitored in real time by the inertial measurement unit 3, and corrects it with the position coordinates provided by the GPS unit, obtaining the accurate motion track of the motor vehicle and hence information such as the curvature, gradient and inclination of the current road.
Through the data transmission unit 6 of the acquisition device 110, the motion track of the motor vehicle, i.e. information such as the curvature, gradient and inclination of the current road, can likewise be transferred to the map creation device 120 and used there to generate the high-precision map.
The map creation device 120 receives, through the data transmission unit 7, the lane line information, the road surface marking information and the curvature, gradient and inclination information of the current road transmitted by the acquisition device 110, and fuses these data through the map processor unit 8, so as to form a high-precision map 200 containing the lane 22, the road surface marking 23 and the other road marking information. The fusion may, for example, be carried out using known SLAM techniques.
In this application, vision, inertial and GPS information are fused through the trinocular camera, the inertial measurement unit and the GPS unit, achieving high-precision positioning with lower-cost sensors, while SLAM techniques are used to generate or update the map.
The high-precision map can be used for advanced driver assistance systems and autonomous vehicles.
It should be noted that the image acquisition system should be calibrated before use. The calibration includes the intrinsic and extrinsic parameters of the monocular camera unit 1 and the binocular camera unit 2, as well as the relative positions of the monocular camera unit 1, the binocular camera unit 2 and the inertial measurement unit 3. The basic principle by which a binocular camera measures distance is triangulation: the two cameras photograph the same scene from different viewpoints, and distance is measured from the difference between the images. If the two cameras of the binocular pair are not synchronized when shooting, the ranging reference objects will be misaligned, causing large errors. Therefore, where necessary to improve the precision of image acquisition, the binocular camera unit 2 can be matched individually.
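Why unsynchronized shutters cause large ranging errors can be seen from first-order error propagation of Z = f * B / d: a disparity error (for example, from the target moving between the two exposures) produces a depth error that grows with the square of the distance. The numbers below are illustrative assumptions.

```python
def depth_error(f_px, baseline_m, depth_m, disparity_err_px):
    """First-order depth error |dZ| ~= Z**2 / (f * B) * |dd|,
    obtained by differentiating Z = f * B / d with d = f * B / Z."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_err_px

# A 1 px mismatch at 12 m vs 30 m (assuming f = 700 px, B = 0.12 m)
print(depth_error(700.0, 0.12, 12.0, 1.0))  # ~1.71 m
print(depth_error(700.0, 0.12, 30.0, 1.0))  # ~10.7 m
```

The quadratic growth with depth is why calibration and shutter synchronization matter most for distant road markings such as signs and light poles.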
Moreover, parameters such as the exposure and white balance of the monocular camera unit 1 and the binocular camera unit 2 are controlled by the algorithmic processing of the processor unit.
The present invention also provides an image acquisition method based on a trinocular camera. Fig. 3 is a flowchart of the image acquisition method based on a trinocular camera according to the present invention. The acquisition method includes:
S10, recognizing the images detected by the monocular camera unit 1 to obtain road marking recognition information;
S20, performing binocular positioning on the images detected by the binocular camera unit 2 to obtain the position information of the road markings;
S30, obtaining the motion track of the trinocular camera;
S40, generating a high-precision map based on the road marking recognition information, the position information of the road markings and the motion track of the trinocular camera.
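Step S40's merge of the three information streams can be sketched as pairing each recognized marking with its position and annotating it with road geometry from the motion track. The names and data structures here are illustrative assumptions, not the patent's data model.

```python
def merge_map_entries(marking_labels, marking_positions, road_geometry):
    """S40 sketch: combine recognition info (S10), binocular positions (S20)
    and road geometry derived from the motion track (S30) into map entries."""
    return [
        {"label": label, "position": pos,
         "curvature": road_geometry["curvature"],
         "gradient": road_geometry["gradient"]}
        for label, pos in zip(marking_labels, marking_positions)
    ]

entries = merge_map_entries(
    ["left_lane_line", "speed_limit_sign"],
    [(101.5, 202.0, 34.0), (103.0, 205.5, 35.0)],
    {"curvature": 0.002, "gradient": 0.01},
)
print(entries[0]["label"], entries[1]["position"])
# left_lane_line (103.0, 205.5, 35.0)
```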
Referring to Figs. 2 and 3, a typical application scenario of the image acquisition system and image acquisition method of the invention is the acquisition of high-precision maps for autonomous vehicles. Fig. 2 is a schematic diagram of the typical environment 20 when the image acquisition system is mounted on a motor vehicle for image acquisition during high-precision map collection, used here to explain the image acquisition system and method. The monocular camera unit 1 and the binocular camera unit 2 are, for example, directed along the driving direction 21 of the motor vehicle. Both units detect the surrounding images of the environment 20 of the motor vehicle in real time, and the detected images are transferred by them to the acquisition processor unit 5.
The images transferred to the acquisition processor unit 5 by the monocular camera unit 1 and the binocular camera unit 2 have a field of view 21 of predetermined size. Within the field of view 21 are the lane 22 in which the motor vehicle is located, the road surface marking 23 in the lane 22, and the surrounding environment outside the lane 22. The lane 22 has a left lane line 221 and a right lane line 222. In the surrounding environment outside the lane 22 there are other road markings such as road signs, traffic lights, light poles and buildings.
The first camera unit 1 and the second camera unit 2 may acquire images with the same period; of course, different image acquisition periods may also be used and are not limited here, but the images acquired by the first camera unit 1 and the second camera unit 2 should be aligned, i.e. the depth information and structural features of objects obtained by the binocular camera unit 2 should match the results extracted by the monocular camera unit 1. Likewise, the period with which the inertial measurement unit 3 and the GPS unit 4 acquire data may be an integer multiple of the image acquisition period, but their data should be aligned with the data acquired by the first camera unit 1 and the second camera unit 2. This alignment can be obtained through a synchronization mechanism, which may be a hardware one: for example, the acquisition processor unit 5 can obtain accurate time information from the GPS unit 4 and periodically send a signal to trigger image acquisition (rising-edge or falling-edge triggering). The synchronization periods of the units may differ; for example, the inertial measurement unit 3 typically updates its data at a much higher frequency than the GPS unit 4 acquires data, but the data obtained by each unit carries an accurate timestamp according to its set period.
Specifically, the method comprises:
S10: recognizing the images detected by the monocular camera unit 1 to obtain road marking identification information.
The acquisition processor unit 5 performs image recognition on the images acquired by the monocular camera unit 1 by deep learning or other well-known image recognition methods, so that the left lane line 221 and the right lane line 222 in the field of view 21 of the motor vehicle can be identified and determined, thereby obtaining lane line identification information.
Likewise, the acquisition processor unit 5 performs image recognition on the images acquired by the monocular camera unit 1 by deep learning or other well-known image recognition methods, so that the road surface markings 23 in the lane 22 of the motor vehicle within the field of view 21 can be identified and determined, thereby obtaining road surface marking identification information.
The acquisition processor unit 5 can also identify and determine, by deep learning or other well-known image recognition methods, the other road markings outside the lane 22 of the motor vehicle, thereby obtaining the identification information of those road markings.
The specific image recognition method is known in the art and is not specifically limited here.
The identification information of a road surface marking 23 may be, for example, an arrow, a speed limit, or a bus lane.
The other road markings in the surrounding environment outside the lane 22 within the field of view 21 include, but are not limited to, traffic lights, traffic signboards, light poles, and buildings. These road markings can likewise be identified and determined by the acquisition processor unit 5.
S20: performing binocular positioning on the images of the binocular camera unit 2 to obtain the position information of the road markings.
In this step, the images detected by the binocular camera unit 2 are processed according to the principle of binocular positioning to obtain the position information of the road markings in the images. As shown in Fig. 4, this step specifically includes:
S201: performing binocular positioning on the images detected by the binocular camera unit 2 to obtain the spatial position information of the road markings relative to the binocular camera unit 2.
Based on the images transferred by the binocular camera unit 2 to the acquisition processor unit 5, the left lane line 221 and the right lane line 222 can be positioned by means of binocular positioning, i.e., the acquisition processor unit 5 can determine the spatial position relationship of the left lane line 221 and the right lane line 222 relative to the binocular camera unit 2.
Likewise, based on the images transferred by the binocular camera unit 2 to the acquisition processor unit 5, the relative position of the road surface markings 23 can be determined by binocular positioning, i.e., the acquisition processor unit 5 can determine the spatial position relationship of the road surface markings 23 relative to the binocular camera unit 2.
In addition, based on the images transferred by the binocular camera unit 2 to the acquisition processor unit 5, the relative positions of other road markings such as traffic lights, traffic signboards, light poles, and buildings can be determined by binocular positioning, i.e., the acquisition processor unit 5 can determine the spatial position relationship of these road markings relative to the binocular camera unit 2.
The technical means for realizing binocular positioning are known in the art and are not specifically limited here.
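One standard form of binocular positioning is triangulation on a calibrated, rectified stereo pair: depth is focal length times baseline divided by disparity. A minimal sketch under those assumptions (the numbers are illustrative, not from the patent):

```python
def triangulate(xl, xr, y, f, baseline):
    """Recover (X, Y, Z) in the left-camera frame from a rectified
    stereo correspondence: pixel column xl in the left image and xr
    in the right image (same row y), with pixel coordinates taken
    relative to the principal point. f is the focal length in pixels,
    baseline the camera separation in metres."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point at infinity or bad match")
    Z = f * baseline / disparity   # depth along the optical axis
    X = xl * Z / f                 # lateral offset
    Y = y * Z / f                  # vertical offset
    return X, Y, Z

# A lane-line point with 20 px of disparity, f = 800 px, 0.4 m baseline:
X, Y, Z = triangulate(xl=100, xr=80, y=-50, f=800, baseline=0.4)
```

With these values the point lies 16 m ahead of the camera, which is how the spatial position relationship of a road marking relative to the binocular camera unit 2 would be expressed.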
S202: obtaining the coordinate information of the binocular camera by the GPS unit 4.
The longitude and latitude coordinates of the motor vehicle can be determined by the GPS unit 4, and since the binocular camera unit 2 is fixedly mounted on the motor vehicle, the coordinate information of the binocular camera unit 2 can be obtained by a simple calculation.
S203: determining the coordinate position information of the road markings according to the spatial position relationship between the road markings and the binocular camera unit 2, in combination with the coordinate information of the binocular camera unit 2.
Taking the lane lines 221, 222 as an example, when the GPS signal is good, the current coordinate information of the motor vehicle can be determined by the GPS unit 4, from which the current coordinate information of the binocular camera unit 2 is obtained. According to the spatial position relationship between the left lane line 221 and right lane line 222 and the binocular camera unit 2, combined with the current longitude and latitude coordinate information of the binocular camera unit 2, the current coordinate position information of the left lane line 221 and the right lane line 222 can be determined, i.e.:
X = X1 + X2 + a,
Y = Y1 + Y2 + b,
Z = Z1 + Z2 + c,
where (X, Y, Z) is the coordinate position information of the lane line 221, 222; (X1, Y1, Z1) is the spatial position information of the lane line 221, 222 relative to the binocular camera 2; (X2, Y2, Z2) is the current coordinate information of the motor vehicle; and (a, b, c) are fixed values representing the fixed mounting position of the binocular camera unit 2 on the motor vehicle.
Likewise, road markings such as road surface markings and traffic lights can be calculated in a similar manner, thereby obtaining their coordinate position information.
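The computation of S203 can be sketched directly from the formulas above; the concrete values below are illustrative only, and in practice the longitude/latitude fix would first be converted into a metric frame:

```python
def road_marking_position(rel, vehicle, mount):
    """(X, Y, Z) = (X1 + X2 + a, Y1 + Y2 + b, Z1 + Z2 + c):
    compose the marking's position relative to the camera (rel),
    the vehicle's current coordinates (vehicle), and the camera's
    fixed mounting offset on the vehicle (mount)."""
    return tuple(r + v + m for r, v, m in zip(rel, vehicle, mount))

# Illustrative values only:
pos = road_marking_position(rel=(2.0, 0.5, 16.0),       # (X1, Y1, Z1)
                            vehicle=(100.0, 200.0, 0.0), # (X2, Y2, Z2)
                            mount=(0.5, 0.0, 1.5))       # (a, b, c)
```

This assumes all three tuples are expressed in the same world-aligned axes; aligning the camera frame with the world frame (rotation by the vehicle heading) is omitted here for brevity.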
When the GPS signal is weak or lost so that the motor vehicle 10 cannot be accurately positioned, the current coordinate information of the motor vehicle can instead be estimated from the camera units and the inertial measurement unit 3 using SLAM or VIO techniques. The specific estimation method is well known and is not specifically limited here.
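During a GPS outage, the inertial data can bridge the gap by dead reckoning, i.e. integrating acceleration into velocity and velocity into position. A deliberately simplified one-dimensional sketch (real SLAM/VIO pipelines are far more involved; the names and numbers here are illustrative):

```python
def dead_reckon(pos, vel, accels, dt):
    """Propagate position and velocity through a sequence of
    accelerometer readings sampled every dt seconds, using simple
    Euler integration."""
    for a in accels:
        vel += a * dt   # integrate acceleration into velocity
        pos += vel * dt # integrate velocity into position
    return pos, vel

# Starting at rest at x = 0, constant 2 m/s^2 for 10 samples at 10 Hz:
pos, vel = dead_reckon(0.0, 0.0, [2.0] * 10, dt=0.1)
```

The unbounded drift of such integration is exactly why the patent corrects the trajectory with GPS fixes and visual constraints as soon as they are available.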
S30: obtaining the motion trajectory of the trinocular camera.
The acquisition processor unit 5 detects feature points in the images transmitted by the monocular camera unit 1 and matches the feature points of the current frame image against those of the previous frame image. From the matched feature points, the motion of the camera can be estimated, i.e., the attitude and position change of the current frame image relative to the previous frame image. The inertial measurement unit 3, usually a multi-axis accelerometer and/or gyroscope, monitors the attitude information of the image capture system in real time. The attitude and position change information obtained from the images is filtered and fused with the attitude information obtained from the inertial measurement unit 3, and corrected with the position coordinates provided by the GPS unit, so that an accurate motion trajectory of the motor vehicle can be obtained. This trajectory contains the attitude information of the image capture system during acquisition, from which information such as the curvature, gradient, and inclination of the road can be obtained.
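The patent does not specify the filter used in the fusion step; one of the simplest possibilities is a complementary filter, in which the high-rate IMU attitude is trusted in the short term and nudged toward the drift-free visual estimate. A minimal sketch with an illustrative blend weight, offered only as an example of such a filter:

```python
def complementary_filter(imu_angles, vision_angles, alpha=0.98):
    """Fuse two attitude streams sample by sample: weight `alpha`
    on the IMU estimate and (1 - alpha) on the visual estimate."""
    return [alpha * g + (1 - alpha) * v
            for g, v in zip(imu_angles, vision_angles)]

# The IMU estimate drifts upward while the visual estimate stays
# near the true 10-degree pitch:
fused = complementary_filter([10.0, 10.5, 11.0], [10.0, 10.0, 10.0])
```

In a full system a Kalman-style filter would also fuse the GPS position fixes, but the idea of weighting a fast, drifting sensor against a slow, stable one is the same.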
S40: generating a high-precision map based on the road marking information and its position and the motion trajectory of the trinocular camera.
The lane line identification information of a series of left lane lines 221 and right lane lines 222, together with the coordinate position information of the left lane lines 221 and right lane lines 222, forms the lane line information; the road surface marking identification information of a series of road surface markings 23, together with the coordinate position information of the road surface markings 23, forms the road surface marking information; and the road marking identification information of a series of other road markings, together with the coordinate position information of those road markings, forms the road marking information. Through the data transmission unit 6 of the acquisition device 110, the lane line information, the road surface marking information, and the road marking information can be transferred to the map generation device 120 and used there to generate a high-precision map.
Likewise, through the data transmission unit 6 of the acquisition device 110, the motion trajectory of the trinocular camera can also be transferred to the map generation device 120 and used there to generate the high-precision map.
The map generation device 120 receives, through the data transmission unit 7, the road marking information transmitted by the acquisition device 110 and the information represented by the motion trajectory of the trinocular camera, such as the curvature, gradient, and inclination of the road. The map generation device 120 fuses these data through the map processor unit 8 to form a high-precision map 200 containing the lane 22, the road surface markings 23, and the other road marking information. The fusion may be performed, for example, using known SLAM techniques.
In the embodiments of the present invention, by fusing the vision, inertial, and GPS data from the trinocular camera, the inertial measurement unit, and the GPS unit, high-precision positioning can be achieved with lower-cost sensors, thereby realizing lower-cost high-precision map data acquisition.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a logical functional division, and in actual implementation there may be other division manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image capture method based on a trinocular camera, comprising the following steps:
S10: recognizing the road markings in an image based on the image detected by the monocular camera unit (1) to obtain road marking identification information;
S20: obtaining the position information of the road markings based on the images detected by the binocular camera unit (2);
S40: generating a high-precision map based on the identification information of the road markings and the position information of the road markings.
2. The method according to claim 1, wherein step S20 comprises:
S201: performing binocular positioning on the images acquired by the binocular camera unit (2) to obtain the spatial position information of the road markings relative to the binocular camera unit (2);
S202: obtaining the coordinate information of the binocular camera by the GPS unit (4);
S203: determining the position information of the road markings according to the spatial position relationship between the road markings and the binocular camera unit (2), in combination with the coordinate information of the binocular camera unit (2).
3. The method according to claim 1 or 2, wherein, when the GPS signal is weak or lost, the current coordinate information of the motor vehicle is estimated from the monocular camera unit (1) and the inertial measurement unit (3).
4. The method according to any one of claims 1-3, further comprising, before step S40:
S30: obtaining the motion trajectory of the trinocular camera;
and wherein, in step S40, the high-precision map is generated based on the road marking identification information, the position information of the road markings, and the motion trajectory of the trinocular camera.
5. The method according to claim 4, wherein step S30 comprises:
S301: performing feature matching between the current frame image and the previous frame image of the monocular camera unit (1) and estimating the motion of the camera, to obtain attitude and position change information of the current frame image relative to the previous frame image;
S302: monitoring the attitude information of the image capture system in real time by the inertial measurement unit (3);
S303: filtering and fusing the attitude and position change information obtained from the images with the attitude information obtained from the inertial measurement unit (3), and correcting with the position coordinates obtained by the GPS unit (4), to obtain the motion trajectory of the trinocular camera.
6. The method according to any one of claims 1-5, wherein the inertial measurement unit (3) is a multi-axis accelerometer and/or gyroscope.
7. The method according to any one of claims 1-6, wherein the motion trajectory of the trinocular camera contains curvature, gradient, and inclination information of the road.
8. The method according to any one of claims 1-7, wherein the road markings are lane lines, arrows, traffic lights, traffic signboards, light poles, and/or buildings.
9. An image capture system based on a trinocular camera, comprising at least one acquisition device (110) and a map generation device (120), wherein:
the at least one acquisition device (110) comprises a monocular camera unit (1), a binocular camera unit (2), an inertial measurement unit (3), a GPS unit (4), an acquisition processor unit (5), and a data transmission unit (6);
the map generation device (120) comprises a data transmission unit (7), a map processor unit (8), and a memory (9);
the monocular camera unit (1) is configured to perform semantic extraction on the road markings along the road to obtain road marking identification information;
the binocular camera unit (2) is configured to perform feature point extraction and matching on the road markings to obtain the spatial position relationship of the road markings relative to the binocular camera unit (2), thereby obtaining the distance between the road markings and the binocular camera and, in turn, the depth information and geometric structure features of objects in the field of view;
the inertial measurement unit (3) is configured to monitor the attitude information of the capture system in real time, to feed back changes in road curvature, gradient, and inclination, and to assist in dead reckoning during transient loss of the GPS signal;
the GPS unit (4) is configured to obtain the position coordinates of the binocular camera unit (2) and to perform data fusion with the binocular camera unit, so as to obtain the position coordinate information of the road markings;
the acquisition processor unit (5) is configured to perform synchronization control, image processing, and information fusion for the monocular camera unit (1) and the binocular camera unit (2), and to send the acquired data to the map generation device (120);
the map processor unit (8) is configured to perform fusion processing on the data from the acquisition device (110) to generate the high-precision map, and to save the generated high-precision map in the memory (9).
10. The system according to claim 9, wherein the map generation device (120) is a remote server or a vehicle-end server.
CN201811392498.5A 2018-11-21 2018-11-21 A kind of image capturing system and method based on three mesh cameras Pending CN109374008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811392498.5A CN109374008A (en) 2018-11-21 2018-11-21 A kind of image capturing system and method based on three mesh cameras


Publications (1)

Publication Number Publication Date
CN109374008A true CN109374008A (en) 2019-02-22

Family

ID=65376820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811392498.5A Pending CN109374008A (en) 2018-11-21 2018-11-21 A kind of image capturing system and method based on three mesh cameras

Country Status (1)

Country Link
CN (1) CN109374008A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047373A (en) * 2019-04-28 2019-07-23 北京超维度计算科技有限公司 A kind of high-precision map generation system based on common application processor
CN110047372A (en) * 2019-04-28 2019-07-23 北京超维度计算科技有限公司 A kind of high-precision map generation system based on Reconfigurable Computation
CN110187371A (en) * 2019-06-03 2019-08-30 福建工程学院 A kind of unmanned high-precision locating method and system based on street lamp auxiliary
CN110262521A (en) * 2019-07-24 2019-09-20 北京智行者科技有限公司 A kind of automatic Pilot control method
CN110321877A (en) * 2019-06-04 2019-10-11 中北大学 Three mesh rearview mirrors of one kind and trinocular vision safe driving method and system
CN110712645A (en) * 2019-10-18 2020-01-21 北京经纬恒润科技有限公司 Method and system for predicting relative position of target vehicle in blind area
CN110889378A (en) * 2019-11-28 2020-03-17 湖南率为控制科技有限公司 Multi-view fusion traffic sign detection and identification method and system
CN111010532A (en) * 2019-11-04 2020-04-14 武汉理工大学 Vehicle-mounted machine vision system based on multi-focal-length camera group and implementation method
CN111192341A (en) * 2019-12-31 2020-05-22 北京三快在线科技有限公司 Method and device for generating high-precision map, automatic driving equipment and storage medium
CN111274974A (en) * 2020-01-21 2020-06-12 北京百度网讯科技有限公司 Positioning element detection method, device, equipment and medium
CN111428663A (en) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 Traffic light state identification method and device, electronic equipment and storage medium
CN111469778A (en) * 2020-04-23 2020-07-31 福建农林大学 Road inspection device based on binocular photogrammetry and combined positioning
CN111881824A (en) * 2020-07-28 2020-11-03 广东电网有限责任公司 Indoor map acquisition method and system based on image recognition
CN112305576A (en) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 Multi-sensor fusion SLAM algorithm and system thereof
WO2021103512A1 (en) * 2019-11-26 2021-06-03 Suzhou Zhijia Science & Technologies Co., Ltd. Method and apparatus for generating electronic map
CN112989909A (en) * 2019-12-17 2021-06-18 通用汽车环球科技运作有限责任公司 Road attribute detection and classification for map enhancement
CN113034586A (en) * 2021-04-27 2021-06-25 北京邮电大学 Road inclination angle detection method and detection system
CN113033253A (en) * 2019-12-24 2021-06-25 北京车和家信息技术有限公司 Camera calibration method and device
WO2021147391A1 (en) * 2020-01-21 2021-07-29 魔门塔(苏州)科技有限公司 Map generation method and device based on fusion of vio and satellite navigation system
CN115265584A (en) * 2022-06-30 2022-11-01 湖南凌翔磁浮科技有限责任公司 GPS-based auxiliary positioning method and device for track detection equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN106017486A (en) * 2016-05-16 2016-10-12 浙江大学 Trajectory inflection point filter-based map location method for unmanned vehicle navigation
CN106919915A (en) * 2017-02-22 2017-07-04 武汉极目智能技术有限公司 Map road mark and road quality harvester and method based on ADAS systems
CN107747941A (en) * 2017-09-29 2018-03-02 歌尔股份有限公司 A kind of binocular visual positioning method, apparatus and system
CN108257161A (en) * 2018-01-16 2018-07-06 重庆邮电大学 Vehicle environmental three-dimensionalreconstruction and movement estimation system and method based on polyphaser



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190222