CN110308720A - Unmanned delivery device and navigation and positioning method and apparatus therefor - Google Patents

Unmanned delivery device and navigation and positioning method and apparatus therefor Download PDF

Info

Publication number
CN110308720A
CN110308720A (application CN201910543477.7A)
Authority
CN
China
Prior art keywords
environment
passenger compartment
environment information
image
seat
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910543477.7A
Other languages
Chinese (zh)
Other versions
CN110308720B (en)
Inventor
程保山
郝立良
申浩
聂琼
王景恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201910543477.7A priority Critical patent/CN110308720B/en
Publication of CN110308720A publication Critical patent/CN110308720A/en
Application granted granted Critical
Publication of CN110308720B publication Critical patent/CN110308720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present application disclose an unmanned delivery device and a navigation and positioning method and apparatus for it. The method includes: separately obtaining first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera; obtaining a passenger compartment number based on the first environment information and a seat number based on the second environment information; and determining the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number. The navigation and positioning scheme of the embodiments achieves high positioning accuracy at low cost and is suitable for large-scale deployment.

Description

Unmanned delivery device and navigation and positioning method and apparatus therefor
Technical field
This application relates to the technical field of navigation and positioning, and in particular to an unmanned delivery device and a navigation and positioning method and apparatus for such a device.
Background
At present, positioning of unmanned delivery devices (unmanned delivery vehicles or delivery robots) mainly relies on multi-sensor fusion schemes built around laser sensors, GPS (Global Positioning System) and IMUs (Inertial Measurement Units), for example SLAM (Simultaneous Localization and Mapping) techniques. Such a technique scans the structure of the surrounding environment with a laser sensor and then computes the pose transformation of the robot by matching the laser data from two consecutive scans, thereby realizing positioning. However, laser sensors are expensive, provide only a global position estimate, and the resulting accuracy is limited.
Summary of the invention
In view of this, embodiments of the present application provide an unmanned delivery device and a navigation and positioning method and apparatus for it, which address the limited positioning accuracy of prior-art unmanned delivery devices at low cost, making the scheme suitable for large-scale deployment.
According to one aspect of the application, a navigation and positioning method for an unmanned delivery device in a public transport vehicle is provided. The method includes:
separately obtaining first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera;
obtaining a passenger compartment number based on the first environment information and a seat number based on the second environment information;
determining the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
According to a further aspect of the application, a navigation and positioning apparatus for an unmanned delivery device in a public transport vehicle is provided. The apparatus includes:
an acquisition module, configured to separately obtain first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera;
a navigation and positioning module, configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
According to another aspect of the application, an unmanned delivery device is provided. The unmanned delivery device travels in the passenger compartments of a public transport vehicle and comprises a first camera, a second camera and a processor, wherein:
the first camera is configured to acquire first environment information and send the first environment information to the processor;
the second camera is configured to acquire second environment information and send the second environment information to the processor;
the processor is configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
According to yet another aspect of the application, a non-transitory computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the steps of the method of the first aspect.
Beneficial effects: with the unmanned delivery device and the navigation and positioning scheme of the embodiments of the present application, the environment information acquired by the two cameras of the device is obtained separately, a passenger compartment number and a seat number are derived from that environment information, and the position of the device in the public transport vehicle is determined from the two. Because the environment of the vehicle is captured by cameras and the compartment and seat numbers are obtained by processing those images, positioning accuracy improves greatly, down to the level of a specific seat, which satisfies the delivery needs of an unmanned delivery device. No high-precision IMU or similar hardware is required, the cost is low, no map data has to be produced in advance, computation is fast, and positioning is efficient.
Brief description of the drawings
Fig. 1 is a flowchart of a navigation and positioning method for an unmanned delivery device in a public transport vehicle according to one embodiment of the application;
Fig. 2 is a schematic flowchart of a navigation and positioning method for an unmanned delivery device in a public transport vehicle according to another embodiment of the application;
Fig. 3 is a block diagram of a navigation and positioning apparatus for an unmanned delivery device in a public transport vehicle according to one embodiment of the application;
Fig. 4 is a block diagram of an unmanned delivery device according to one embodiment of the application;
Fig. 5 is a schematic structural diagram of a non-transitory computer-readable storage medium according to one embodiment of the application.
Detailed description of the embodiments
To make the above objects, features and advantages of the embodiments of the present application clearer and easier to understand, the embodiments are described in further detail below with reference to the accompanying drawings and specific implementations. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the application without creative effort fall within the scope of protection of the embodiments of the application.
The technical concept of the application is as follows: to address the technical problems of prior-art robot navigation and positioning, namely limited positioning accuracy, the need to build an environment map in advance, and the computational complexity of multi-sensor data fusion, a navigation and positioning scheme is proposed for an unmanned delivery device moving inside a public transport vehicle. The scheme can accurately determine the passenger compartment number and the seat number of an unmanned delivery device (such as an unmanned delivery vehicle or robot) moving through the compartments, realizing high-accuracy positioning for automatic delivery of goods and meeting practical application requirements.
Fig. 1 is a flowchart of a navigation and positioning method for an unmanned delivery device in a public transport vehicle according to one embodiment of the application. Referring to Fig. 1, the method of this embodiment includes:
Step S101: separately obtaining first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera;
Step S102: obtaining a passenger compartment number based on the first environment information and a seat number based on the second environment information;
Step S103: determining the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
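For readers who prefer code, the three steps can be composed as in the minimal Python sketch below; the camera attributes and helper names (front_camera, update_carriage_number, update_seat_row) are hypothetical placeholders, since the patent does not prescribe any implementation.

```python
# Minimal sketch of steps S101-S103 (names are illustrative assumptions).
def locate(device):
    # Step S101: grab one frame from each camera.
    first_env_image = device.front_camera.read()    # wide-angle, forward-facing
    second_env_image = device.bottom_camera.read()  # aimed at the seat bases

    # Step S102: derive the passenger compartment number and the seat row,
    # e.g. by the template matching described below.
    compartment_no = update_carriage_number(first_env_image)
    seat_row = update_seat_row(second_env_image)

    # Step S103: the pair (compartment, seat row) is the position in the train.
    return compartment_no, seat_row
```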
As shown in Fig. 1, the navigation and positioning method of this embodiment acquires environment information of the public transport vehicle through the first and second cameras of the unmanned delivery device, processes each stream to obtain the passenger compartment number and the seat number, and combines the two to obtain the device's position in the vehicle. This not only meets the important requirement of positioning inside a moving public transport vehicle; the positioning accuracy is also high enough to resolve individual seats. Furthermore, no environment map of the vehicle has to be produced in advance, so the computation is simple and efficient. Finally, compared with positioning schemes based on multi-sensor data fusion, no high-precision sensors need to be installed, the cost is low, and the scheme is suitable for large-scale deployment.
It should be noted that the public transport vehicles of this embodiment include conventional trains, high-speed trains, aircraft and the like: vehicles of this kind all have multiple passenger compartments and multiple seats, and the unmanned delivery device (for example an unmanned delivery vehicle) can travel along the aisles of the vehicle. Taking a high-speed train as an example, the unmanned delivery device travels along the aisle of a carriage to carry out automatic delivery of goods such as food and beverages.
Taking a high-speed train as an example, the steps of the navigation and positioning method of this embodiment are explained below in more detail. Fig. 2 is a schematic flowchart of a navigation and positioning method for an unmanned delivery device in a public transport vehicle according to another embodiment of the application. The unmanned delivery device of this embodiment is equipped with a first camera and a second camera, and the flow starts by separately obtaining the first environment information acquired by the first camera and the second environment information acquired by the second camera.
Referring to Fig. 2, step S201 is executed to obtain the image acquired by the first camera.
Obtaining the image acquired by the first camera here means obtaining the first environment information acquired by the first camera mounted at the front of the body of the unmanned delivery device; the first environment information is a first environment image. Distortion correction may also be applied to the first environment image to mitigate the distortion of the original frame.
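As a concrete illustration of this optional distortion-correction step, the sketch below undistorts a frame with OpenCV; the intrinsic matrix and distortion coefficients are made-up placeholder values standing in for a real calibration of the wide-angle camera.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; in practice they would come from a prior
# calibration of the wide-angle front camera.
camera_matrix = np.array([[400.0, 0.0, 320.0],
                          [0.0, 400.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # radial/tangential terms

def correct_distortion(first_env_image):
    # Undistort the wide-angle frame before any template matching.
    return cv2.undistort(first_env_image, camera_matrix, dist_coeffs)
```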
Step S210: obtain the image acquired by the second camera.
Obtaining the image acquired by the second camera means obtaining the second environment information acquired by the second cameras mounted on both sides of the bottom of the body of the unmanned delivery device; the second environment information is a second environment image.
In this embodiment the field of view of the first camera is larger than that of the second camera. A wide-angle camera is chosen as the first camera because the interior of a high-speed rail carriage is structurally complex, and a wide-angle camera can capture as much of the carriage's structural features, from top to bottom, as possible.
Next, the passenger compartment number is obtained from the first environment information and the seat number from the second environment information. Obtaining the passenger compartment number from the first environment information includes: matching the first environment image against a pre-stored first template image; if the match succeeds, incrementing the current passenger compartment count by 1 and updating the current passenger compartment number. The first template image records the features of the run-through channel (the gangway connecting two adjacent passenger compartments). Obtaining the seat number from the second environment information includes: matching the second environment image against a pre-stored second template image; if the match succeeds, incrementing the current seat count by 1 and updating the current seat number. The second template image records the features of the seats inside a passenger compartment. Each is explained separately below.
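One straightforward way to realize the "match against a pre-stored template image" step is normalized cross-correlation template matching, sketched below with OpenCV; the 0.8 acceptance threshold is an assumed value, since the patent does not fix one.

```python
import cv2

MATCH_THRESHOLD = 0.8  # assumed acceptance threshold

def matches_template(env_image, template_image, threshold=MATCH_THRESHOLD):
    """Return True if the template (run-through channel or seat base) is
    found anywhere in the environment image."""
    gray = cv2.cvtColor(env_image, cv2.COLOR_BGR2GRAY)
    tmpl = cv2.cvtColor(template_image, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= threshold
```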
As shown on the left of Fig. 2, after the image acquired by the first camera, i.e. the first environment image, is obtained, step S202 is executed: matching against the run-through channel template image.
Specifically, the first environment image is matched against the pre-stored first template image, which records the features of the run-through channel connecting two adjacent passenger compartments. In other words, a template image of the train's run-through channel is obtained in advance in this example.
Step S203: has the match succeeded?
This step judges whether the first environment image has matched the first template image (the run-through channel template image shown in Fig. 2) consistently twice; if it has, success is confirmed. Because the run-through channel template image contains the structural features of the run-through channel, and the run-through channel is the specific structure connecting two adjacent compartments, two consecutive consistent matches against the template indicate that the unmanned delivery device has passed through into the next carriage. In other words, in this embodiment the entrance of a compartment is located by means of the distinctive template image information at the run-through channel between compartments. If the match does not succeed, step S204 is executed.
Step S204: Hough transform.
Here a Hough transform extracts the edge-line information in the image and is used to correct the heading of the unmanned delivery device, so as to improve positioning accuracy. Understandably, an unmanned delivery device such as an unmanned vehicle travelling in a high-speed rail carriage will inevitably encounter obstacles, and to avoid them it needs to adjust its heading dynamically. When the first environment image does not match the pre-stored first template image, the device is currently on the aisle of the carriage rather than at a junction, so whether the heading needs to change can be determined from the first environment image. Specifically, this includes:
performing a Hough transform on the first environment image to obtain a plurality of candidate lines; screening the candidate lines according to their geometric length and their angle with the line along the side of the passenger compartment of the public transport vehicle (such as a high-speed rail carriage), to obtain the candidate line parallel to the line along the compartment aisle as the reference line (i.e. the lane line); applying an inverse perspective (back-projection) transform to the reference line so that it is parallel to the vertical direction of the first environment image; if, after the transform, there is an angle between the reference line and the central axis of the first environment image, adjusting the heading of the unmanned delivery device (i.e. the heading of the vehicle) according to the size of that angle; if there is no such angle, keeping the heading unchanged.
It should be noted that applying a Hough transform to the original image, i.e. the first environment image, and obtaining the inverse perspective view and the candidate lines is prior art; for implementation details see the existing literature, which is not repeated here. When the candidate lines are screened by geometric length and by angle with the line along the compartment side, lines shorter than a preset length threshold, for example 10 centimetres, are filtered out, because lines that short cannot be the aisle line.
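A minimal sketch of the candidate-line extraction and heading-angle estimate is given below. It uses a probabilistic Hough transform and filters lines by length and by angle to the image vertical axis; the inverse-perspective (back-projection) step is omitted for brevity, and both numeric thresholds are assumed values rather than figures from the patent.

```python
import cv2
import numpy as np

MIN_LINE_LEN_PX = 100          # assumed pixel threshold (the text mentions ~10 cm)
MAX_ANGLE_TO_AISLE_DEG = 15.0  # assumed tolerance for "parallel to the aisle"

def heading_correction_angle(first_env_image):
    """Return the angle (degrees) between the aisle reference line and the
    image vertical axis, or None if no suitable candidate line is found."""
    gray = cv2.cvtColor(first_env_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=MIN_LINE_LEN_PX, maxLineGap=10)
    if lines is None:
        return None
    best = None
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        # Angle of the candidate line relative to the image vertical axis,
        # used here as a stand-in for the aisle direction after back-projection.
        angle = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))
        angle = min(angle, 180.0 - angle)
        if angle <= MAX_ANGLE_TO_AISLE_DEG and (best is None or length > best[0]):
            best = (length, angle)  # keep the longest near-vertical line
    return None if best is None else best[1]
```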
Step S205: adjust the heading.
If the previous step, S204, determined that there is an angle between the back-projected reference line and the central axis of the first environment image, the heading of the unmanned delivery device is adjusted according to the size of the angle to keep the vehicle from leaving the aisle. How to adjust the heading of the device is prior art and is not described here.
Step S206: obtain the mileage information acquired by the odometer.
In this embodiment, locating the passenger compartment number by matching the first environment image from the first camera against the template image may produce false positives. For example, in one embodiment the device starts from the 9th carriage with the 13th carriage as its destination. Driving from the 9th to the 10th carriage it passes run-through channel A, which connects carriages 9 and 10, and then continues until it reaches run-through channel B, which connects carriages 10 and 11; that is, it passes two run-through channels in succession, which satisfies the condition of two consecutive successful run-through channel matches. If the compartment number were incremented on this condition alone, it would be easy to miscount. For this reason, this embodiment additionally checks the mileage information after two consecutive successful matches.
Step S207: accumulate the travelled distance.
The travelled-distance information acquired by the odometer in step S206 is accumulated to obtain the mileage the unmanned delivery device has travelled.
Step S208: is the mileage greater than the mileage threshold? If so, execute step S209; otherwise return to step S207.
Here the mileage is compared with a mileage threshold, for example 20 metres (the specific value should be chosen according to the carriage length).
Step S209: increment the passenger compartment number.
If the previous step determines that the mileage exceeds the threshold, that is, the unmanned delivery device has passed run-through channels satisfying the condition of two consecutive successful matches and the mileage has also exceeded the threshold, then the current passenger compartment count is incremented by 1 and the recorded current passenger compartment number is updated. Miscounting of the compartment number is thereby avoided and the accuracy of the positioning is guaranteed.
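The following sketch shows one possible reading of steps S203 and S206-S209: a run-through channel (gangway) match arms the counter, and the passenger compartment number is only incremented once the odometer distance accumulated since the previous increment exceeds an assumed threshold of roughly one carriage length.

```python
CARRIAGE_LENGTH_THRESHOLD_M = 20.0  # assumed value; should reflect the carriage length

class CompartmentCounter:
    """Tracks the current passenger compartment number from gangway template
    matches gated by odometer distance (one possible reading of Fig. 2)."""

    def __init__(self, start_compartment):
        self.compartment = start_compartment
        self.distance_since_increment_m = 0.0
        self.gangway_seen = False

    def on_frame(self, gangway_matched, delta_distance_m):
        self.distance_since_increment_m += delta_distance_m
        if gangway_matched:
            self.gangway_seen = True
        # Increment only when a gangway has been matched AND enough distance
        # has been travelled, so closely spaced matches do not over-count.
        if self.gangway_seen and self.distance_since_increment_m > CARRIAGE_LENGTH_THRESHOLD_M:
            self.compartment += 1
            self.gangway_seen = False
            self.distance_since_increment_m = 0.0
        return self.compartment
```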
Steps S211-S213 concern positioning the unmanned delivery device at seat-number level in the high-speed train. Specifically:
Step S211: matching against the seat template image.
Similar to the image matching step in the compartment-number positioning above, the second environment image acquired by the second camera is matched here against the seat template image, which records the structural features of a seat, such as the structure of the seat base. Template matching thus determines which seat row the device has reached.
Step S212: has the match succeeded? If so, execute step S213; otherwise execute step S211.
This step judges whether the second environment image matches the pre-stored second template image, which records the seat features inside the compartment. The specific image matching is prior art and is not described here. If the second environment image matches the pre-stored second template image, the current seat count is incremented by 1.
Step S213: increment the seat count.
In this step the recorded current seat row is incremented by 1: if the current row is 14, after the increment the updated current row is 15. Because seat numbers follow a fixed rule, once the row that the unmanned delivery device has reached is known, a specific seat number (for example seat 06F) can be inferred from the coding rule of each seat row in each carriage, obtained in advance.
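As an illustration of this "coding rule" remark, the sketch below expands a detected row number into concrete seat labels; the letter set A/B/C/D/F mirrors common Chinese high-speed rail seating but is an assumption here, not something the patent specifies.

```python
# Hypothetical seat-coding rule for illustration only.
SEAT_LETTERS = ("A", "B", "C", "D", "F")

def seat_labels_for_row(row_number):
    """Expand a detected seat row into concrete seat numbers,
    e.g. row 15 -> ['15A', '15B', '15C', '15D', '15F']."""
    return [f"{row_number:02d}{letter}" for letter in SEAT_LETTERS]
```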
Step S214: determine the passenger compartment number and the seat number.
The passenger compartment number determined in step S209 and the seat number determined in step S213 are combined to give the position of the unmanned delivery device in the high-speed rail carriage.
In addition, other embodiments of the application provide an implementation that positions the device from the environment images as follows: the first environment information is a first environment image and the second environment information is a second environment image. Obtaining the passenger compartment number based on the first environment information includes: performing optical character recognition on the first environment image to obtain the number in the first environment image, and comparing that number with the numbers in a preset passenger compartment white list; if the number appears in the white list, the passenger compartment number is obtained from the number in the first environment image, the white list being set according to the passenger compartment numbering of the public transport vehicle. Obtaining the seat number based on the second environment information includes: performing optical character recognition on the second environment image to obtain the number in the second environment image, and comparing that number with the numbers in a preset seat white list; if the number appears in the white list, the seat number is obtained from the number in the second environment image, the white list being set according to the seat numbering of the public transport vehicle.
In this mode, navigation and positioning is realized through the cameras and optical character recognition. Compared with the image template matching of the previous embodiment, it requires more computing resources and its positioning accuracy is slightly lower.
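A minimal sketch of this OCR-plus-white-list variant is shown below; pytesseract is used only as one possible OCR backend, and the white lists (carriages 1-16, rows 01-20) are assumed examples rather than values from the patent.

```python
import re
import pytesseract  # one possible OCR backend; the patent only requires OCR

COMPARTMENT_WHITELIST = {str(n) for n in range(1, 17)}    # assumed: carriages 1-16
SEAT_ROW_WHITELIST = {f"{n:02d}" for n in range(1, 21)}   # assumed: rows 01-20

def number_from_image(env_image, whitelist):
    """OCR the frame, keep only digit groups, and accept the first one that
    appears in the white list built from the vehicle's numbering scheme."""
    text = pytesseract.image_to_string(env_image, config="--psm 6 digits")
    for token in re.findall(r"\d+", text):
        if token in whitelist or token.lstrip("0") in whitelist:
            return token
    return None
```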
It can be seen from the above that the navigation and positioning method of this embodiment acquires environment images and matches them against pre-stored template images of the run-through channels at the ends of high-speed rail carriages and of the seat bases, thereby realizing seat-level positioning within a carriage. The absolute position of the unmanned delivery device does not need to be computed in real time, which saves energy. Navigation relies on the structural information of the carriage interior (its straight-line structure), so no environment map has to be produced in advance, which reduces computational complexity and improves efficiency.
Based on the same technical concept as the navigation and positioning method described above, embodiments of the application also provide a navigation and positioning apparatus for an unmanned delivery device in a public transport vehicle. Referring to Fig. 3, the navigation and positioning apparatus 300 of this embodiment includes an acquisition module 301 and a navigation and positioning module 302:
the acquisition module 301 is configured to separately obtain the first environment information acquired by the first camera of the unmanned delivery device and the second environment information acquired by the second camera;
the navigation and positioning module 302 is configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
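The two-module split can be pictured with the small skeleton below; the camera interface and the injected estimator callables are hypothetical, since the patent describes the modules functionally rather than as code.

```python
class AcquisitionModule:
    """Wraps the two cameras of the device (hypothetical camera interface)."""
    def __init__(self, front_camera, bottom_camera):
        self.front_camera = front_camera
        self.bottom_camera = bottom_camera

    def acquire(self):
        # First environment image from the wide-angle front camera,
        # second environment image from the cameras at the bottom sides.
        return self.front_camera.read(), self.bottom_camera.read()


class NavigationPositioningModule:
    """Combines a compartment estimator and a seat-row estimator into a position."""
    def __init__(self, compartment_from_image, seat_row_from_image):
        self.compartment_from_image = compartment_from_image  # e.g. gangway matching + odometry
        self.seat_row_from_image = seat_row_from_image        # e.g. seat-base template matching

    def position(self, first_env_image, second_env_image):
        return (self.compartment_from_image(first_env_image),
                self.seat_row_from_image(second_env_image))
```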
In one embodiment of the application, the first environment information is a first environment image, and the navigation and positioning module 302 is specifically configured to match the first environment image against a pre-stored first template image; if the match is consistent, to increment the current passenger compartment count by 1 and update the current passenger compartment number; the first template image records the features of the run-through channel connecting two adjacent passenger compartments.
In one embodiment of the application, the second environment information is a second environment image, and the navigation and positioning module 302 is specifically configured to match the second environment image against a pre-stored second template image; if the match is consistent, to increment the current seat count by 1 and update the current seat number; the second template image records the seat features inside a passenger compartment.
In one embodiment of the application, the first environment information is a first environment image and the second environment information is a second environment image, and the navigation and positioning module 302 is specifically configured to perform optical character recognition on the first environment image, obtain the number in the first environment image and compare it with the numbers in a preset passenger compartment white list; if the number appears in the white list, the passenger compartment number is obtained from the number in the first environment image, the white list being set according to the passenger compartment numbering of the public transport vehicle; and to perform optical character recognition on the second environment image, obtain the number in the second environment image and compare it with the numbers in a preset seat white list; if the number appears in the white list, the seat number is obtained from the number in the second environment image, the white list being set according to the seat numbering of the public transport vehicle.
In one embodiment of the application, before incrementing the current passenger compartment count by 1, the navigation and positioning module 302 also obtains the travelled-distance information acquired by the odometer of the unmanned delivery device, compares the travelled distance with a mileage threshold, and increments the current passenger compartment number only if the travelled distance exceeds the threshold.
In one embodiment of the application, the acquisition module 301 is configured to obtain the first environment information acquired by the first camera mounted at the front of the body of the unmanned delivery device and the second environment information acquired by the second cameras mounted on both sides of the bottom of the body, the field of view of the first camera being larger than that of the second camera.
In one embodiment of the application, if the first environment image does not match the pre-stored first template image, the navigation and positioning module 302 is further configured to determine from the first environment image whether the driving direction needs to change, specifically: performing a Hough transform on the first environment image to obtain a plurality of candidate lines; screening the candidate lines according to their geometric length and their angle with the line along the side of the passenger compartment, to obtain the candidate line parallel to the line along the compartment aisle as the reference line; applying an inverse perspective transform to the reference line so that it is parallel to the vertical direction of the first environment image; if there is an angle between the transformed reference line and the central axis of the first environment image, adjusting the driving direction of the unmanned delivery device according to the size of the angle; otherwise keeping the driving direction unchanged.
The functions performed by the modules of the apparatus shown in Fig. 3 have been illustrated in the method embodiments above and are not repeated here.
Embodiments of the application also provide an unmanned delivery device. Referring to Fig. 4, the unmanned delivery device 400 of this embodiment includes a first camera 401, a second camera 402 and a processor 403:
the first camera 401 is configured to acquire first environment information and send the first environment information to the processor;
the second camera 402 is configured to acquire second environment information and send the second environment information to the processor;
the processor 403 is configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
In one embodiment of the application, the first environment information is a first environment image, and the processor 403 is specifically configured to match the first environment image against a pre-stored first template image; if the match is consistent, to increment the current passenger compartment count by 1 and update the current passenger compartment number; the first template image records the features of the run-through channel connecting two adjacent passenger compartments.
In one embodiment of the application, the second environment information is a second environment image, and the processor 403 is specifically configured to match the second environment image against a pre-stored second template image; if the match is consistent, to increment the current seat count by 1 and update the current seat number; the second template image records the seat features inside a passenger compartment.
In one embodiment of the application, the first environment information is a first environment image and the second environment information is a second environment image, and the processor 403 is specifically configured to perform optical character recognition on the first environment image, obtain the number in the first environment image and compare it with the numbers in a preset passenger compartment white list; if the number appears in the white list, the passenger compartment number is obtained from the number in the first environment image, the white list being set according to the passenger compartment numbering of the public transport vehicle; and to perform optical character recognition on the second environment image, obtain the number in the second environment image and compare it with the numbers in a preset seat white list; if the number appears in the white list, the seat number is obtained from the number in the second environment image, the white list being set according to the seat numbering of the public transport vehicle.
In one embodiment of the application, before incrementing the current passenger compartment count by 1, the processor 403 obtains the travelled-distance information acquired by the odometer of the unmanned delivery device, compares the travelled distance with a mileage threshold, and increments the current passenger compartment number only if the travelled distance exceeds the threshold.
In one embodiment of the application, the first camera is mounted at the front of the body of the unmanned delivery device, the second cameras are mounted on both sides of the bottom of the body, and the field of view of the first camera is larger than the field of view of the second camera.
In one embodiment of the application, if the first environment image does not match the pre-stored first template image, the processor 403 is further configured to determine from the first environment image whether the heading needs to change, specifically: performing a Hough transform on the first environment image to obtain a plurality of candidate lines; screening the candidate lines according to their geometric length and their angle with the line along the side of the passenger compartment, to obtain the candidate line parallel to the line along the compartment aisle as the reference line; applying an inverse perspective transform to the reference line so that it is parallel to the vertical direction of the first environment image; if there is an angle between the transformed reference line and the central axis of the first environment image, adjusting the heading of the unmanned delivery device according to the size of the angle; otherwise keeping the heading unchanged.
In conclusion the technical solution of the embodiment of the present application is realized more under the application scenarios of the public transports such as high-speed rail High positioning accuracy, meets practical application request.And the application can be real without pre-establishing high-iron carriage environmental map Existing navigation programming, reduces computation complexity, improves location efficiency.Finally, the technical solution of the embodiment of the present application, nobody matches It send without installing high-precision IMU sensor etc. on device, low in hardware cost is suitble to large-scale promotion application.
It should be understood that:
The algorithms and displays provided herein are not inherently related to any particular computer, virtual device or other equipment. Various general-purpose devices may also be used together with the teachings herein. As described above, the structure required to construct such a device is apparent. Moreover, the embodiments of the application are not directed to any particular programming language; the content of the embodiments described herein can be realized with various programming languages, and the description of a specific language above is intended to disclose the best mode of the embodiments of the application.
Numerous specific details are set forth in the specification provided here. It should be understood, however, that the embodiments of the application can be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments above the features of the embodiments are sometimes grouped together in a single embodiment, figure or description thereof. The disclosed method should not, however, be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single embodiment disclosed above. Therefore, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the application.
Those skilled in the art will understand that the modules in a device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. Modules, units or components of an embodiment can be combined into one module, unit or component, and can furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the embodiments of the application and form further embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to realize some or all of the functions of some or all of the components of the device according to the embodiments of the application. The application may also be implemented as device or apparatus programs (for example computer programs and computer program products) for executing part or all of the methods described herein. Such programs implementing the embodiments of the application may be stored on a computer-readable medium or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
Fig. 5 is a schematic structural diagram of a non-transitory computer-readable storage medium according to one embodiment of the application. The computer-readable storage medium 500 stores a computer program for executing the method steps according to the embodiments of the application; it can be read by the processor of the unmanned delivery device, and when the computer program is run by the unmanned delivery device it causes the device to execute the steps of the method described above. Specifically, the computer program stored on the computer-readable storage medium can execute the method shown in any of the embodiments above. The computer program may be compressed in a suitable form.
It should be noted that the above embodiments illustrate rather than limit the embodiments of the application, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference sign placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The embodiments of the application can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A navigation and positioning method for an unmanned delivery device in a public transport vehicle, characterized in that the method comprises:
separately obtaining first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera;
obtaining a passenger compartment number based on the first environment information and a seat number based on the second environment information;
determining the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
2. The navigation and positioning method according to claim 1, characterized in that the first environment information is a first environment image, and obtaining a passenger compartment number based on the first environment information comprises:
matching the first environment image against a pre-stored first template image;
if the first environment image matches the first template image consistently twice, incrementing the current passenger compartment count by 1 and updating the current passenger compartment number;
wherein the first template image records the features of the run-through channel connecting two adjacent passenger compartments.
3. The navigation and positioning method according to claim 1, characterized in that the second environment information is a second environment image, and obtaining a seat number based on the second environment information comprises:
matching the second environment image against a pre-stored second template image;
if the match is consistent, incrementing the current seat count by 1 and updating the current seat number;
wherein the second template image records the seat features inside a passenger compartment.
4. The navigation and positioning method according to claim 1, characterized in that the first environment information is a first environment image and the second environment information is a second environment image,
obtaining a passenger compartment number based on the first environment information comprises:
performing optical character recognition on the first environment image to obtain the number in the first environment image, and comparing the number with the numbers in a preset passenger compartment white list; if the number in the first environment image appears in the passenger compartment white list, obtaining the passenger compartment number from the number in the first environment image, the passenger compartment white list being set according to the passenger compartment numbering of the public transport vehicle;
and obtaining a seat number based on the second environment information comprises:
performing optical character recognition on the second environment image to obtain the number in the second environment image, and comparing the number with the numbers in a preset seat white list; if the number in the second environment image appears in the seat white list, obtaining the seat number from the number in the second environment image, the seat white list being set according to the seat numbering of the public transport vehicle.
5. The navigation and positioning method according to claim 2, characterized in that before the current passenger compartment count is incremented by 1 the method further comprises:
obtaining travelled-distance information acquired by an odometer of the unmanned delivery device;
comparing the travelled distance with a mileage threshold, and incrementing the current passenger compartment number by 1 only if the travelled distance is greater than the mileage threshold.
6. The navigation and positioning method according to any one of claims 1 to 5, characterized in that separately obtaining first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera comprises:
obtaining the first environment information acquired by the first camera mounted at the front of the body of the unmanned delivery device,
and
obtaining the second environment information acquired by the second cameras mounted on both sides of the bottom of the body of the unmanned delivery device,
wherein the field of view of the first camera is larger than the field of view of the second camera.
7. The navigation and positioning method according to claim 6, characterized in that if the first environment image does not match the pre-stored first template image, whether the heading needs to change is determined from the first environment image, specifically comprising:
performing a Hough transform on the first environment image to obtain a plurality of candidate lines;
screening the candidate lines according to the geometric length of each candidate line and its angle with the line along the side of the passenger compartment of the public transport vehicle, to obtain the candidate line parallel to the line along the compartment aisle as the reference line;
applying an inverse perspective (back-projection) transform to the reference line so that the reference line is parallel to the vertical direction of the first environment image;
if there is an angle between the transformed reference line and the central axis of the first environment image, adjusting the heading of the unmanned delivery device according to the size of the angle;
if there is no angle between the transformed reference line and the central axis of the first environment image, keeping the heading of the unmanned delivery device unchanged.
8. A navigation and positioning apparatus for an unmanned delivery device in a public transport vehicle, characterized in that it comprises:
an acquisition module, configured to separately obtain first environment information acquired by a first camera of the unmanned delivery device and second environment information acquired by a second camera;
a navigation and positioning module, configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
9. An unmanned delivery device, characterized in that the unmanned delivery device travels in the passenger compartments of a public transport vehicle and comprises a first camera, a second camera and a processor, wherein:
the first camera is configured to acquire first environment information and send the first environment information to the processor;
the second camera is configured to acquire second environment information and send the second environment information to the processor;
the processor is configured to obtain a passenger compartment number based on the first environment information and a seat number based on the second environment information, and to determine the position of the unmanned delivery device in the public transport vehicle from the passenger compartment number and the seat number.
10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor the steps of the method according to any one of claims 1 to 7 are realized.
CN201910543477.7A 2019-06-21 2019-06-21 Unmanned distribution device and navigation positioning method and device thereof Active CN110308720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910543477.7A CN110308720B (en) 2019-06-21 2019-06-21 Unmanned distribution device and navigation positioning method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910543477.7A CN110308720B (en) 2019-06-21 2019-06-21 Unmanned distribution device and navigation positioning method and device thereof

Publications (2)

Publication Number Publication Date
CN110308720A true CN110308720A (en) 2019-10-08
CN110308720B CN110308720B (en) 2021-02-23

Family

ID=68077681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910543477.7A Active CN110308720B (en) 2019-06-21 2019-06-21 Unmanned distribution device and navigation positioning method and device thereof

Country Status (1)

Country Link
CN (1) CN110308720B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08147591A (en) * 1994-11-17 1996-06-07 Nec Corp Traffic flow measurement device
CN1945351A (en) * 2006-10-21 2007-04-11 中国科学院合肥物质科学研究院 Robot navigation positioning system and navigation positioning method
CN101419705A * 2007-10-24 2009-04-29 深圳华为通信技术有限公司 Camera calibration method and device
CN202395858U (en) * 2011-12-14 2012-08-22 深圳市中控生物识别技术有限公司 Binocular photographic device
CN104142683A * 2013-11-15 2014-11-12 上海快仓智能科技有限公司 Automated guided vehicle navigation method based on two-dimensional code positioning
CN107918747A * 2017-11-23 2018-04-17 大唐华银电力股份有限公司耒阳分公司 Carriage number recognition method
CN109195106A * 2018-09-17 2019-01-11 北京三快在线科技有限公司 In-train positioning method and device
CN109357676A * 2018-10-19 2019-02-19 北京三快在线科技有限公司 Positioning method and device for a mobile device, and mobile device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
秦敏 (QIN Min): "Research on a lane line detection and tracking system based on machine vision", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN110308720B (en) 2021-02-23

Similar Documents

Publication Publication Date Title
WO2021121306A1 (en) Visual location method and system
EP3798982A1 (en) Adjustment value calculation method
CN108230379A (en) For merging the method and apparatus of point cloud data
CN109426800B (en) Lane line detection method and device
WO2021178234A1 (en) System and method for autonomous vehicle systems simulation
EP3671623B1 (en) Method, apparatus, and computer program product for generating an overhead view of an environment from a perspective image
CN111081033B (en) Method and device for determining orientation angle of vehicle
US20200342239A1 (en) People-gathering analysis device, movement destination prediction creation device, people-gathering analysis system, vehicle, and recording medium
CN111859185A (en) Method, system and device for recommending boarding points and storage medium
CN112105892A (en) Identifying map features using motion data and bin data
US20210350142A1 (en) In-train positioning and indoor positioning
US20200072614A1 (en) Automated emergency response
CN111859179A (en) Method, system and device for recommending boarding points and storage medium
CN112598668B (en) Defect identification method and device based on three-dimensional image and electronic equipment
CN110308720A (en) A kind of unmanned dispenser and its navigation locating method, device
WO2022152081A1 (en) Navigation method and apparatus
CN107727092A (en) Information prompting method, device and electronic equipment
US20140244170A1 (en) Adaptive route proposals based on prior rides
CN112859109B (en) Unmanned aerial vehicle panoramic image processing method and device and electronic equipment
CN113469045A (en) Unmanned card-collecting visual positioning method and system, electronic equipment and storage medium
CN114648572A (en) Virtual positioning method and device and virtual positioning system
JP2021071885A (en) Area cut-out method and area cut-out program
CN114550481B (en) Information processing device, information processing system, information processing method, and storage medium
US20210207972A1 (en) Architecture recognition method and identification system
WO2024107299A1 (en) Detection of close encounters with obstacles by aerial vehicles

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant