CN107065871A - A machine-vision-based self-propelled dining car identification and positioning system and method - Google Patents

A machine-vision-based self-propelled dining car identification and positioning system and method

Info

Publication number
CN107065871A
CN107065871A CN201710223549.0A CN201710223549A CN107065871A CN 107065871 A CN107065871 A CN 107065871A CN 201710223549 A CN201710223549 A CN 201710223549A CN 107065871 A CN107065871 A CN 107065871A
Authority
CN
China
Prior art keywords
dining car
self-propelled
camera
walking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710223549.0A
Other languages
Chinese (zh)
Inventor
刘立意
韩宗昌
俞强
刘潇
汪雨晴
李平安
李杨
李冰洁
刘卓然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Agricultural University
Original Assignee
Northeast Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Agricultural University filed Critical Northeast Agricultural University
Priority to CN201710223549.0A priority Critical patent/CN107065871A/en
Publication of CN107065871A publication Critical patent/CN107065871A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic singals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a machine-vision-based identification and positioning system and method for a self-propelled dining car, comprising the self-propelled dining car, a video image acquisition device, and a server host. The self-propelled dining car comprises a car body, a positioning marker, a control device, a drive and travel mechanism, an obstacle-avoidance device, an ordering client, and a power supply. The video image acquisition device is an array of multiple wireless cameras used to identify and position the cart over the whole dining area. The control system comprises a wireless transceiver unit and a single-chip microcomputer; the microcomputer exchanges data wirelessly with the server host, which processes the data. By recognizing the marker on the self-propelled dining car with machine vision, the invention achieves global positioning and heading determination for the cart without remodeling the restaurant floor. It is simple to operate and can replace manual labor to complete ordering, food delivery, and billing quickly and accurately, improving service quality while raising operating efficiency.

Description

A machine-vision-based self-propelled dining car identification and positioning system and method
Technical field
The present invention relates to a machine-vision-based identification and positioning system and method for a self-propelled dining car, and belongs to the field of machine-vision target recognition and positioning.
Background technology
With continuous technological development and growing demand on the catering service industry, self-propelled dining cars have gradually come into use, easing the workload of service staff and improving overall operating efficiency. However, typical food-delivery robots currently on the market often fall short of practical service requirements. For example, the "restaurant path navigation system and navigation method for a robot" of Shaanxi University of Technology (patent CN103335652A) can navigate autonomously, but it requires the user to start from the dining-room coordinate point corresponding to the robot's initial position and to enter multiple ordered adjacent coordinate points in sequence, so it cannot handle tables and chairs whose positions change; moreover, the robot delivers food along a fixed route, which limits its movement and is inconvenient to use. Likewise, the robot of the "robot service restaurant system" of Shenzhen Dafu Precision Machinery Co., Ltd. (patent CN204229192U) needs a magnetic track to be installed, together with a tracking positioner and obstacle-avoidance device on the body, making the whole system bulky and space-consuming; because the robot moves along a guide rail, it places excessive demands on the restaurant environment, which must be remodeled before use.
To solve these problems of the prior art, the present invention provides a machine-vision-based identification and positioning system and method for a self-propelled dining car, comprising the self-propelled dining car, a video image acquisition device, and a server host. It achieves global positioning of the cart, is simple to operate, and requires no floor remodeling; the line connecting the centre coordinate of a high-brightness ring light-strip marker and the centre coordinate of a multi-triangle pattern marker serves as a reference vector for judging the cart's heading.
The content of the invention
The object of the present invention is to provide a machine-vision-based identification and positioning system and method for a self-propelled dining car that effectively solves problems of the prior art, in particular the need for fixed guide rails. Positioning by machine vision reduces restaurant fitting-out cost, enables autonomous food delivery, and is expected to relieve labor shortages in modern catering service.
To achieve the above object, the present invention adopts the following technical scheme: a machine-vision-based self-propelled dining car identification and positioning system, comprising a self-propelled dining car (1), a video image acquisition device (2), and a server host (3), characterized in that:
The self-propelled dining car (1) comprises a car body (4), a positioning marker (5), a control device (6), a drive and travel mechanism (7), an obstacle-avoidance device (8), an ordering client (9), and a power supply (10).
The video image acquisition device (2) is an array of cameras, each suspended from the ceiling with its lens optical axis pointing vertically down at the floor. The image overlap region of adjacent cameras is larger than the image region of the cart body, so that adjacent cameras can simultaneously capture a cart entering the monitoring overlap region.
The positioning marker (5) comprises a high-brightness ring light-strip marker and a multi-triangle pattern marker; the ring light-strip marker is located at the centre of the top of the car body, and the multi-triangle pattern marker is located at the front of the top of the car body. The server host (3) automatically identifies the cart's position from the images acquired by the camera array.
The drive and travel mechanism (7) comprises a motor driver (13) and a drive motor (14) and drives the motion of the cart.
The obstacle-avoidance device (8) consists of several groups of ultrasonic obstacle-avoidance sensors, at least two per group, placed at the front, rear, left, and right sides of the cart to detect obstacles around the car body (4).
The control device (6) comprises a single-chip microcomputer (11) and a wireless transceiver unit (12); the microcomputer (11) exchanges wireless data and commands with the server host (3) through the transceiver unit (12) and controls the drive and travel mechanism (7) according to the obstacle-avoidance device (8) signals.
The ordering client (9) and the server host (3) exchange data over a wireless LAN; ordering information is sent to the server host, which completes order placement and price settlement.
The positioning marker (5) specifically comprises a red high-brightness ring light-strip marker and a black-and-white multi-triangle pattern marker; the ring light-strip marker is located at the centre of the top of the car body, the multi-triangle pattern marker is located at the front of the top, and the server host (3) automatically identifies the cart's position from the images acquired by the camera array.
The machine-vision-based self-propelled dining car positioning method comprises the following steps:
S1: The camera array acquires overhead images of the dining room. The server host analyses the cameras one by one for the high-brightness ring light-strip marker by luminance; when the marker is found in a camera's monitoring region, it then checks that region for the multi-triangle pattern marker. If the pattern is present, the candidate corner points of the pattern's edges are extracted and the centre coordinate of the ring light strip is computed; otherwise the next camera's monitoring region is analysed.
S2: A square structuring element of side a is used to calibrate the region centred on each candidate corner a second time. The number of corner points in the region is counted and judged; candidates whose corner count fails the set threshold are rejected as noise, and the ideal corners are taken as the cart's positioning points.
S3: The edge coordinates of the multi-triangle pattern marker are taken and the triangle centre coordinate is computed as the cart's actual position; the real-time position coordinates are returned and saved.
S4: The line connecting the centre coordinate of the ring light-strip marker and the centre coordinate of the multi-triangle pattern marker is used as a reference vector, from which the fore-and-aft direction of the cart's travel is obtained.
In step S1, the candidate corners are obtained as follows:
S11: The server host extracts luminance from the image information returned by the cameras and performs checkerboard corner feature detection on the image region in which the high-brightness marker was detected.
S12: When the image window in the region containing the high-brightness ring light-strip marker is translated by (i, j) in any direction, the grey-level change produced is:
E(u, v) = Σ W(i, j) · [I(u+i, v+j) − I(i, j)]²
where (i, j) is the translation of the image window, E(u, v) is the grey-level change after translation, I(u+i, v+j) is the grey value of the region centred on (i, j) after translation, I(i, j) is the grey value before translation, and W(i, j) is the window function.
S13: By Taylor expansion, the translated grey value I(u+i, v+j) is approximated as:
I(u+i, v+j) ≈ I(i, j) + Ix·u + Iy·v
where Ix and Iy are the grey gradients of the image in the horizontal and vertical directions respectively; the expression in S12 then becomes:
E(u, v) = (u, v) · M · (u, v)T
where
M = Σ W(i, j) · [Ix², Ix·Iy; Ix·Iy, Iy²] = [A, C; C, B]
is the autocorrelation matrix at pixel (i, j).
S14: The corner response function is constructed as:
CRF = Det(M) − K · (tr(M))²
where Det(M) = A·B − C² and tr(M) = A + B. When CRF exceeds a threshold T, the point is judged to be a candidate corner.
S15: The monitoring region containing candidate corners is taken as the camera region in which the cart is located. When the ring light-strip marker enters an overlapping monitoring region, several cameras obtain the high-brightness information simultaneously: within the monitoring regions of cameras 1, 2, 3, and 4, camera 1 is the default preferred display camera; within the regions of cameras 3, 4, 5, and 6, camera 3 is the default. If the marker features leave the current camera's monitoring region, the camera that still sees the complete marker is set as the preferred display camera.
In step S2, the candidate corners are screened with a square structuring element of side a to eliminate noise corners, as follows. Each candidate corner framed by the structuring element is judged locally and serialized; the candidate is a required ideal corner only if it satisfies
N(i)(x, y) > 3, |D(i+1)X − D(i)X| ≤ m, |D(i+1)Y − D(i)Y| ≤ m
where N(i)(x, y) is the number of candidate corners in the pixel region whose template centre is the coordinate (x, y) of the i-th pixel point; D(i)X and D(i+1)X are the horizontal pixel coordinates of the i-th and (i+1)-th candidate corners respectively, and D(i)Y and D(i+1)Y the corresponding vertical pixel coordinates; m takes 7 to 9.
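The second-pass screening of step S2 can be sketched as a simple density test: a candidate survives only when enough other candidates fall inside the square template centred on it. The count threshold of 3 follows the text; treating the spacing bound m as the template half-width is an assumption of this sketch.

```python
def screen_corners(cands, m=8, min_count=3):
    """Step S2 noise rejection (illustrative): keep a candidate corner only
    if more than `min_count` candidates (itself included) lie inside the
    m-pixel square template centred on it; isolated responses are noise."""
    kept = []
    for (x, y) in cands:
        n = sum(1 for (u, v) in cands
                if abs(u - x) <= m and abs(v - y) <= m)
        if n > min_count:  # the text retains corners when the count exceeds 3
            kept.append((x, y))
    return kept
```

A tight cluster of checkerboard corners passes, while a lone spurious response far from the marker is discarded.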
In step S3, with the lower-left corner of the camera array as the coordinate origin, the viewing region of the (2n−1)-th camera and the viewing region of the 2n-th camera are determined from the camera spacing and field of view; the monitoring camera covering the cart is thus determined, and the cart's world coordinates are obtained from its marker coordinates within that camera's image region together with the camera's monitoring field of view.
The remarkable result of the present invention is to realize the identification and positioning to walking dining car certainly using video image acquisition device, is led to Cross server host and carry out wireless data transmission, client of ordering completes command interaction, and finally walking dining car certainly can reach without rail Mark, the beneficial effect of self-navigation, fast accurate complete service role.Operation is succinct, largely improves integrity service matter Amount.
Brief description of the drawings
Fig. 1 is a structural block diagram of the self-propelled dining car identification and positioning system of the present invention;
Fig. 2 is a schematic diagram of the monitoring regions of the camera array;
Fig. 3 is a flow chart of the self-propelled dining car positioning method based on the multi-triangle pattern marker features;
Fig. 4 is a device layout diagram of the present invention;
Fig. 5 is a schematic diagram of the recognition of the multi-triangle pattern marker.
Embodiment
The present invention is illustrated below with reference to an example, which does not limit its scope.
In this example, against the background of the catering service industry, a machine-vision-based self-propelled dining car identification and positioning system and method is proposed. An array of cameras is used, each suspended from the ceiling with its lens optical axis pointing vertically down at the floor; the image overlap region of adjacent cameras is larger than the image region of the cart body, so that adjacent cameras can simultaneously capture a cart entering the monitoring overlap region. The illuminated field of view of a camera is determined from the lens angle and distance according to the relations
W = w · H / f, L = l · H / f
giving the field-of-view width W and length L, where f is the focal length of the wireless camera lens, H is the height of the lens above the floor, l is the imaging length on the CCD target surface, and w is the imaging width on the CCD target surface.
According to the field of view so obtained, the coordinate origin is placed at the lower-left, widest position of the angular field of camera No. 2; cameras 2, 4 and 6 are suspended along the horizontal positive direction and cameras 1, 3 and 5 along the vertical positive direction. So that the cart's marker features can enter an overlapping monitoring region completely, the cameras in the array are arranged with horizontal spacing HI and vertical spacing VI.
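The ground-footprint relations above are the similar-triangles scaling of the CCD target surface by H/f, and the camera spacing follows from requiring the overlap to exceed the cart body. A small sketch, with the focal length, mounting height, and sensor dimensions below chosen purely as illustrative values:

```python
def field_of_view(f_mm, H_mm, sensor_w_mm, sensor_l_mm):
    """Ground footprint of one camera: the CCD target-surface dimensions
    scaled by the ratio of mounting height H to focal length f
    (W = w*H/f, L = l*H/f)."""
    W = sensor_w_mm * H_mm / f_mm
    L = sensor_l_mm * H_mm / f_mm
    return W, L

def array_spacing(view_extent, overlap):
    """Spacing between adjacent cameras along one axis so that their views
    overlap by `overlap`, which must exceed the cart body size."""
    return view_extent - overlap

# e.g. a 4 mm lens at 3 m height with a 4.8 x 3.6 mm sensor (assumed values)
W, L = field_of_view(4.0, 3000.0, 4.8, 3.6)  # -> 3600 mm x 2700 mm footprint
```

With an 800 mm required overlap, horizontal spacing HI would come out as 3600 − 800 = 2800 mm under these assumptions.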
In this embodiment of the invention, the machine-vision-based self-propelled dining car identification and positioning system comprises the self-propelled dining car, the video image acquisition device, and the server host. The cart is divided into a car body, a positioning marker, a control device, a drive and travel mechanism, an obstacle-avoidance device, an ordering client, and a power supply. The positioning marker comprises a red high-brightness ring light-strip marker and a black-and-white multi-triangle pattern marker. The ring light strip is a type-5050 LED high-brightness red strip fixed at the centre of the top of the car body; each light-emitting element is 5.0 mm long and 5.0 mm wide, with 30 LEDs per metre, a strip length of 0.3 m, an operating voltage of 12 V, and an operating current of 1.2 A. The multi-triangle pattern marker is located at the front of the top of the car body; it improves detection precision and resists environmental interference. The control device is divided into a wireless transceiver unit and a single-chip microcomputer; the transceiver unit exchanges wireless data and commands with the server host, and the drive and travel mechanism is controlled according to the obstacle-avoidance device signals. The drive and travel mechanism is divided into a motor driver and a drive motor; the drive motor is a stepper motor fixed inside the driving wheel, its output shaft connected to the wheel through a coupling. The motor driver controls the stepper motor's rotation with the received travel signals, which the server host transmits through the wireless transceiver unit.
In this example, the cameras use a wireless communication protocol to send their coverage of the dining room to the management terminal of the server host, realizing real-time monitoring.
The wireless transceiver unit uses a USB wireless serial module and an RS232 wireless transceiver module. The USB wireless serial module operates at 5 V with a line-of-sight transmission range of 300-1800 m and a carrier frequency of 433 MHz, has strong penetration, and is connected to the server host for wireless signal transmission. The RS232 wireless transceiver module interfaces with the single-chip microcomputer; its transmission power is 100 mW, its instantaneous transmission current 100 mA, its receive current 35 mA, and its operating voltage 5 V.
The obstacle-avoidance device comprises eight high-precision ultrasonic obstacle-avoidance sensors: two at the front of the cart, two at the rear, two on the right side, and two on the left side. The sensors use KS103 ultrasonic modules with real-time temperature compensation, which compensate for temperature and light intensity simultaneously; detection precision is high, performance is stable, the blind zone is 1 cm, the ranging span is 1 cm-8 m, and the operating voltage is 3.0-5.5 V. The pins are VCC (supply voltage), GND (ground), SCL (receive), and SDA (transmit). When an ultrasonic sensor detects an obstacle closer than 0.5 m, the detected obstacle information is sent to the microcomputer, which issues an avoidance signal to the drive motor so that the cart avoids the obstacle and completes delivery; the microcomputer also transmits the range signal to the server host through the wireless transceiver unit for obstacle judgement.
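The text only specifies the 0.5 m trigger distance; how the microcomputer then steers is not stated, so the decision policy below (stop on a frontal obstacle, steer away from a side obstacle) is an illustrative assumption layered on the patent's threshold.

```python
STOP_DISTANCE_M = 0.5  # the patent's trigger: obstacle closer than 0.5 m

def avoidance_command(ranges_m):
    """Map the eight KS103 ultrasonic readings (a front/rear/left/right pair
    per side, in metres) to a simple avoidance decision. The policy itself
    is a hypothetical sketch, not taken from the patent."""
    near = {side: min(pair) < STOP_DISTANCE_M
            for side, pair in ranges_m.items()}
    if near.get("front"):
        return "stop_and_avoid"
    if near.get("left") and not near.get("right"):
        return "steer_right"
    if near.get("right") and not near.get("left"):
        return "steer_left"
    return "proceed"
```

With all readings above 0.5 m the cart proceeds; a single close frontal reading is enough to trigger avoidance, mirroring the per-sensor threshold in the text.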
In this example, the computer receives the camera image information through the wireless signal receiver and recognizes its luminance to detect the high-brightness ring light-strip marker, as follows.
The received image information is converted from the RGB colour model to the YCbCr colour model:
Y = 0.299·R + 0.587·G + 0.114·B
Cb = 0.564·(B − Y)
Cr = 0.713·(R − Y)
where Y is the luminance of the colour, and Cb and Cr are the concentration offsets of the blue and red components respectively.
The Y value of each pixel is judged in turn against a fixed threshold interval and the result output as a binary image. Region features are extracted by morphological image processing, and the cart region is judged from a morphological size threshold: if a camera's monitoring region contains a region matching the morphological size of the high-brightness ring light strip, the luminance marker is present; the camera of that monitoring region then collects images for multi-triangle pattern corner feature extraction, the region containing corners is taken as the cart's monitoring region, and the centroid coordinate of the ring light strip is extracted. If the monitoring region does not satisfy the high-brightness morphological features, the next camera's monitoring region is analysed.
In this example, the cameras return monitoring-region images, and the server host uses the above YCbCr colour model to extract the red light-strip luminance from the camera images, choosing an appropriate threshold for accurate extraction. From the extracted high-brightness feature the host determines the monitoring camera corresponding to the cart, and the camera of that monitoring region performs corner feature extraction on the image. If the marker enters an overlapping monitoring region, several cameras obtain images of the marker target simultaneously; the server host counts the multi-triangle pattern corner points in each image, and the camera whose image contains more corner points becomes the preferred display camera. If part of the corner features leaves camera No. n's monitoring region, the other camera in which the features are fully present is set as the preferred display camera.
In this example, the multi-triangle pattern marker uses a black background and is spliced from nine black-and-white equilateral triangles of side 9 cm. After the marker is calibrated, the checkerboard corners at the black-white junctions are obtained and connected to give the edge contours. The mean of the calibrated pattern-corner abscissas and the mean of the ordinates are taken as the triangle centre coordinate, i.e. the cart's actual position, and the real-time position coordinates are returned and saved. The pattern corners serve as feature points in the recognition process: points where brightness changes sharply in the two-dimensional image. Using these corners, the image's feature points can be determined in the dining-room environment; while retaining the important features of the image figure, they efficiently reduce the data volume.
In this example, an image is acquired at set time intervals during visual processing; if part of the corner features leaves camera No. n's monitoring region, the other camera in which the features are fully present is set as the preferred display camera. The computer converts the image information into a two-dimensional plane coordinate system and applies the Harris corner detection algorithm to each frame to obtain the positions of the checkerboard corners. The derivation of the cart position by Harris checkerboard corner detection is as follows.
Let (x, y) be the coordinate of a point in the two-dimensional image acquired by the camera and (u, v) the image-window translation; the grey-level change produced is E(u, v):
E(u, v) = Σ [w(x, y) · [I(x+u, y+v) − I(x, y)]²]   (1)
where w(x, y) is the Gaussian smoothing window function, I(x+u, y+v) is the grey value after translation, and I(x, y) is the grey value before translation.
By Taylor expansion:
I(x+u, y+v) = I(x, y) + Ix·u + Iy·v + o(u², v²)   (2)
where Ix and Iy are the partial derivatives, i.e. the directional derivatives of the image.
Substituting (2) into (1) gives:
E(u, v) = Σ [w(x, y) · [Ix·u + Iy·v + o(u², v²)]²],
approximated as E(u, v) = Σ [w(x, y) · [Ix·u + Iy·v]²]   (3)
Writing (3) in matrix form gives:
E(u, v) = (u, v) · M · (u, v)T
where M = Σ w(x, y) · [Ix², Ix·Iy; Ix·Iy, Iy²]   (4)
Trace(M) is the sum of the diagonal of matrix M and Det(M) is its determinant; the corner decision value is R(x, y) = Det(M) − k·(Trace(M))². When R(x, y) exceeds a set threshold P, the grey value at point (x, y) changes strongly in both the horizontal and vertical directions, and the pixel is judged a checkerboard candidate corner.
In this example, part of the acquired candidate corners are noise points. With each corner as the inspection centre, a square template with a pixel side length of 6 is set, and the candidates of each frame are calibrated a second time, judging whether
N(i)(u, v) > 3, |D(i+1)X − D(i)X| ≤ m, |D(i+1)Y − D(i)Y| ≤ m
holds. If so, the candidate corner is a required marker corner; if not, the candidate cannot serve as a computer identification point.
Here N(i)(u, v) is the number of candidate corners in the pixel region whose template centre is the coordinate (u, v) of the i-th pixel point;
D(i)X and D(i+1)X are the horizontal pixel coordinates of the i-th and (i+1)-th candidate corners respectively, and D(i)Y and D(i+1)Y the corresponding vertical pixel coordinates;
m takes 7 to 9.
When the number of corners inside the set square pixel block exceeds 3, the corner is retained. Pixel detection proceeds point by point on this principle; all satisfying pixels are displayed and their coordinates returned, giving the pixel-limited corners of the checkerboard marker, which serve as the reference points during food delivery and the points determining the cart's position.
In this example, the line connecting the centre coordinate of the ring light-strip marker and the centre coordinate of the multi-triangle pattern marker is used as the reference vector: the multi-triangle pattern edge contour marks the cart's advancing front and the ring light-strip marker its rear, and the angle between the reference vectors detected in the previous frame and the current frame gives the cart's direction of travel.
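The rear-to-front reference vector and the frame-to-frame turn angle can be sketched as follows; coordinates are the two marker centres in the common image plane, and the function names are illustrative.

```python
import math

def heading(ring_center, tri_center):
    """Reference vector from the ring-light centre (rear) to the
    multi-triangle pattern centre (front); its angle is the cart's
    facing direction in the image plane."""
    dx = tri_center[0] - ring_center[0]
    dy = tri_center[1] - ring_center[1]
    return math.atan2(dy, dx)

def turn_angle(prev_heading, curr_heading):
    """Signed change of direction between two frames, wrapped to (-pi, pi]
    so a small physical turn never reads as a near-full rotation."""
    d = curr_heading - prev_heading
    while d <= -math.pi:
        d += 2 * math.pi
    while d > math.pi:
        d -= 2 * math.pi
    return d
```

Comparing the headings of consecutive frames gives the cart's turn, which is the angle between the two reference vectors described above.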
In this example, the ordering client system is mapped to the intranet on the server host using the TCP communication protocol; after connecting to the mapped external IP address and port number, the client accesses the host. The ordering client transmits the order information to the host's management terminal over the communication link, completing dish ordering and price settlement.
In this example, with the lower-left corner of the camera array as the coordinate origin, the viewing regions of the (2n−1)-th and 2n-th cameras are determined; the monitoring camera covering the cart is thus identified, and the cart's world coordinates are obtained from its marker coordinates within that camera's image region together with the camera's monitoring field of view.
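The explicit region formulas were lost in extraction, so the sketch below is only a hedged reconstruction of the stated idea: cameras sit on a grid with horizontal spacing HI and vertical spacing VI, origin at the lower-left camera, and the cart's world position is the grid offset of the covering camera plus the marker coordinate inside that view. The grid layout and parameter names are assumptions.

```python
def world_coords(cam_index, local_xy, h_spacing, v_spacing, cams_per_row=2):
    """World coordinates of the cart: the covering camera's grid offset
    (spacing HI horizontally, VI vertically, origin at the lower-left
    camera) plus the marker's coordinate inside that camera's view.
    The 2-per-row grid mirrors the odd/even camera numbering in the text."""
    idx = cam_index - 1            # the text numbers cameras from 1
    col = idx % cams_per_row
    row = idx // cams_per_row
    x = col * h_spacing + local_xy[0]
    y = row * v_spacing + local_xy[1]
    return (x, y)
```

For camera 1 the world coordinate equals the local marker coordinate; for camera 4 one horizontal and one vertical spacing are added first.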

Claims (7)

1. a kind of walk dining car identification alignment system and method certainly based on machine vision, including dining car (1), video image are walked certainly obtain Device (2), server host (3) are taken, it is characterized in that:
the self-propelled dining car (1) comprises a car body (4), positioning markers (5), a control device (6), a drive and travel device (7), an obstacle avoidance device (8), an ordering client (9) and a power supply (10);
the video image acquisition device (2) is an array of multiple cameras, each mounted on the ceiling with its optical axis pointing vertically down at the floor; the image overlap region of adjacent cameras is larger than the image region of the dining car body, so that adjacent cameras can simultaneously capture the dining car as it enters the monitored overlap region.
2. The machine-vision-based self-propelled dining car identification and positioning system according to claim 1, characterized in that the positioning markers (5) comprise a red high-brightness ring light strip marker and a black-and-white multi-triangle pattern marker, wherein the ring light strip marker is located at the centre of the car body's top and the multi-triangle pattern marker at the front end of the top; the server host (3) automatically identifies the cart's position and orientation from the images acquired by the camera array.
3. The machine-vision-based self-propelled dining car identification and positioning system according to claim 1, characterized in that the drive and travel device (7) comprises a motor driver (13) and a motor (14) for driving the cart's motion;
the obstacle avoidance device (8) comprises at least four groups of ultrasonic obstacle avoidance sensors, at least two per group, placed at the front, rear, left and right sides of the cart to detect obstacles around the car body (4);
the control device (6) comprises a single-chip microcomputer (11) and a wireless transceiver unit (12), wherein the microcomputer (11) exchanges wireless data and commands with the server host (3) through the transceiver unit (12) and controls the drive and travel device (7) according to signals from the ultrasonic obstacle avoidance device (8);
the ordering client (9) and the server host (3) communicate over a wireless LAN; the client transmits the order information to the server host, which completes order placement and price settlement.
4. A method for identifying and positioning a self-propelled dining car, based on the machine-vision-based identification and positioning system of claim 1, characterized by comprising the following steps:
S1: the camera array acquires overhead images of the dining room; the server host first identifies the high-brightness ring light strip marker by analysing each camera's image for brightness, then checks whether the multi-triangle pattern marker is present in the monitoring region of the camera that sees the ring light marker; if so, the candidate corner points on the edges of the multi-triangle marker are extracted and the centre coordinate of the ring light strip is computed; otherwise analysis continues with the next camera's monitoring region;
S2: with each candidate corner as the centre of a square structuring element of given side length, the region is examined a second time to count the corner points it contains; candidates whose corner count does not meet the set threshold are rejected, and the ideal corners that do meet it are kept as the cart's positioning points;
S3: the corner coordinates on the multi-triangle marker's edges are taken and the triangle centre coordinate is computed as the cart's actual position; the real-time position coordinates are returned and stored;
S4: the multi-triangle marker is taken as the cart's front and the ring light strip marker as its rear; the line from the ring light strip centre to the triangle marker centre serves as a reference vector, from which the cart's heading deviation is judged.
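Steps S3 and S4 can be sketched as follows (Python; the corner lists are assumed already screened per S2, and the helper names are illustrative):

```python
def triangle_center(corners):
    """S3: centroid of the multi-triangle marker's edge corner points,
    taken as the cart's actual position."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (sum(xs) / len(corners), sum(ys) / len(corners))

def offset_vector(ring_center, tri_center):
    """S4: reference vector from the ring light strip centre (rear) to the
    triangle marker centre (front); its direction is the cart's heading."""
    return (tri_center[0] - ring_center[0], tri_center[1] - ring_center[1])
```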
5. The method according to claim 4, characterized in that in step S1 the candidate corner points are obtained as follows:
S11: the server host extracts, from the images uploaded by the cameras, the image region in which the high-brightness marker is detected, and performs checkerboard corner feature detection on that region;
S12: the grey-level change produced when the image window in the region containing the ring light strip marker is translated by (u, v) in an arbitrary direction is:
E(u, v) = Σ_{(i, j)∈W} W(i, j) · [I(u+i, v+j) - I(i, j)]²
where (u, v) is the translation of the image window, E(u, v) the grey-level change after translation, I(u+i, v+j) the grey value of the region centred at (i, j) after translation, I(i, j) the grey value before translation, and W(i, j) the window function;
S13: by Taylor expansion, the translated grey value I(u+i, v+j) is approximated as:
I(u+i, v+j) ≈ I(i, j) + Ix·u + Iy·v
where Ix and Iy are the grey-level gradients of the image in the horizontal and vertical directions respectively; the expression in S12 can then be converted to:
E(u, v) = (u, v) · M · (u, v)^T
where M = Σ_{(i, j)∈W} W(i, j) · [Ix², Ix·Iy; Ix·Iy, Iy²] = [A, C; C, B] is the autocorrelation matrix of pixel (i, j);
S14: the corner response function is constructed as:
CRF = Det(M) - K · (tr(M))²
where Det(M) = A·B - C², tr(M) = A + B;
when CRF exceeds a threshold T, the point is judged to be a candidate corner;
S15: the monitoring region containing candidate corners is then determined to be the camera region where the cart is located. When the ring light strip marker enters an overlapping monitored region, several cameras capture the highlight simultaneously: within the overlap of monitoring regions 1, 2, 3 and 4, camera 1 is by default the preferred display camera; within the overlap of regions 3, 4, 5 and 6, camera 3 is the default. If the identification features leave the current camera's monitoring region, the other camera that captures the features completely is set as the preferred display camera.
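Steps S12 to S14 amount to the Harris corner response; a self-contained sketch (Python with NumPy; the uniform box window and the central-difference gradient are assumptions, since the claim leaves W(i, j) and the gradient operator unspecified):

```python
import numpy as np

def box_sum(img, win):
    """Sum of `img` over a win x win window centred at each pixel
    (zero padding at the borders)."""
    pad = win // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def harris_response(gray, k=0.04, win=3):
    """Corner response CRF = Det(M) - k*(tr(M))^2 per S12-S14;
    `gray` is a 2-D float array, `win` the summation window size."""
    Ix = np.gradient(gray, axis=1)    # horizontal grey-level gradient
    Iy = np.gradient(gray, axis=0)    # vertical grey-level gradient
    A = box_sum(Ix * Ix, win)         # elements of M = [[A, C], [C, B]]
    B = box_sum(Iy * Iy, win)
    C = box_sum(Ix * Iy, win)
    return (A * B - C * C) - k * (A + B) ** 2

def candidate_corners(gray, thresh, k=0.04, win=3):
    """S14: pixels whose response exceeds the threshold T are candidates."""
    crf = harris_response(gray, k, win)
    ys, xs = np.nonzero(crf > thresh)
    return list(zip(xs.tolist(), ys.tolist()))
```

On a synthetic step image the response is strongly positive at the L-shaped corner and zero in flat regions, which is what the threshold T in S14 separates.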
6. The method according to claim 4, characterized in that in step S2 the candidate corners are first screened with a square structuring element of given side length, as follows: each candidate corner framed by the structuring element is judged locally and for continuity; if it satisfies:
N(i)(x, y) ≤ m
D(i)X - a < D(i+1)X < D(i)X + a
D(i)Y - a < D(i+1)Y < D(i)Y + a
then the candidate is a desired ideal corner point; where N(i)(x, y) is the number of candidate corners within the pixel region whose template centre is the coordinate (x, y) of the i-th pixel; D(i)X and D(i+1)X are the horizontal pixel coordinates of the i-th and (i+1)-th candidate corners, and D(i)Y and D(i+1)Y their vertical pixel coordinates; m is taken as 7 to 9.
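A sketch of this screening (Python; the tolerance a, the template size and the ordering of candidates are assumptions, the claim fixes only the inequalities and m = 7 to 9):

```python
def screen_corners(cands, m=8, a=5, win=9):
    """Keep candidate (x, y) if at most m candidates fall inside the win x win
    template centred on it, and the next candidate's x and y coordinates lie
    within +/- a of its own (the continuity condition of claim 6)."""
    kept = []
    half = win // 2
    for i, (x, y) in enumerate(cands):
        n_local = sum(1 for (u, v) in cands
                      if abs(u - x) <= half and abs(v - y) <= half)
        if n_local > m:
            continue                      # too many corners crowd the template
        if i + 1 < len(cands):
            nx, ny = cands[i + 1]
            if not (x - a < nx < x + a and y - a < ny < y + a):
                continue                  # next candidate jumps too far away
        kept.append((x, y))
    return kept
```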
7. The method according to claim 4, characterized in that in step S3, with the lower-left corner of the camera array as the coordinate origin, the field of view of camera 2n-1 is:
x ∈ ((n-1)(W-HI), nW-(n-1)HI)
y ∈ (L-VI, 2L-VI);
the field of view of camera 2n is:
x ∈ ((n-1)(W-HI), nW-(n-1)HI)
y ∈ (0, L-VI);
the monitoring camera covering the cart is determined, and from the cart's marker coordinates within that camera's image region, together with the camera fields of view above, the cart's world coordinates are obtained.
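Combining the camera index with the marker's in-image coordinate gives world coordinates; in this sketch (Python) the linear pixel-to-world scaling and the image-y-grows-downward convention are assumptions, only the region bounds come from the formulas of claim 7:

```python
def to_world(cam_idx, px, py, img_w, img_h, W, L, HI, VI):
    """Map pixel (px, py) in camera cam_idx's image (img_w x img_h pixels) to
    world coordinates, origin at the lower-left corner of the camera array."""
    n = (cam_idx + 1) // 2
    x_min = (n - 1) * (W - HI)
    x_max = n * W - (n - 1) * HI
    if cam_idx % 2 == 1:                  # camera 2n-1: upper row
        y_min, y_max = L - VI, 2 * L - VI
    else:                                 # camera 2n: lower row
        y_min, y_max = 0, L - VI
    wx = x_min + (px / img_w) * (x_max - x_min)
    wy = y_min + (1 - py / img_h) * (y_max - y_min)  # image y grows downward
    return (wx, wy)
```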
CN201710223549.0A 2017-04-07 2017-04-07 Machine-vision-based self-propelled dining car identification and positioning system and method Pending CN107065871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710223549.0A CN107065871A (en) 2017-04-07 2017-04-07 Machine-vision-based self-propelled dining car identification and positioning system and method

Publications (1)

Publication Number Publication Date
CN107065871A true CN107065871A (en) 2017-08-18

Family

ID=59603318

Country Status (1)

Country Link
CN (1) CN107065871A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540540A (en) * 2018-03-22 2018-09-14 杭州乐航科技有限公司 Intelligent IoT positioning and monitoring management system and method for a passenger aircraft dining car
CN109634268A (en) * 2017-10-09 2019-04-16 北京瑞悟科技有限公司 Intelligent restaurant service robot
CN109633662A (en) * 2018-12-28 2019-04-16 百度在线网络技术(北京)有限公司 Obstacle positioning method, device and terminal
CN110069065A (en) * 2019-04-24 2019-07-30 合肥柯金自动化科技股份有限公司 AGV station positioning system based on laser navigation and image recognition
CN110605723A (en) * 2019-05-20 2019-12-24 江西理工大学 Distributed system embedded robot with highly integrated modular design
CN111504270A (en) * 2020-06-16 2020-08-07 常州市盈能电气有限公司 Robot positioning device
CN112406967A (en) * 2019-08-23 2021-02-26 盟立自动化股份有限公司 Railcar system, railcar, and visual sensing device
CN112533340A (en) * 2020-12-01 2021-03-19 南京苏美达智能技术有限公司 Light control method of self-walking equipment and self-walking equipment
CN112614181A (en) * 2020-12-01 2021-04-06 深圳乐动机器人有限公司 Robot positioning method and device based on highlight target

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07160330A (en) * 1993-12-01 1995-06-23 Fuji Heavy Ind Ltd Position self-detecting method for autonomous traveling working vehicle
CN101436037A (en) * 2008-11-28 2009-05-20 深圳先进技术研究院 Dining room service robot system
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN203745904U (en) * 2014-02-27 2014-07-30 梁学坚 Restaurant service robot system
CN105467994A (en) * 2015-11-27 2016-04-06 长春诺惟拉智能科技有限责任公司 Vision and ranging fusion-based food delivery robot indoor positioning system and positioning method
CN105629969A (en) * 2014-11-03 2016-06-01 贵州亿丰升华科技机器人有限公司 Restaurant service robot
CN105856227A (en) * 2016-04-18 2016-08-17 呼洪强 Robot vision navigation technology based on feature recognition
CN105865471A (en) * 2016-04-01 2016-08-17 深圳安迪尔智能技术有限公司 Robot navigation method and navigation robot
CN207008404U (en) * 2017-04-07 2018-02-13 东北农业大学 Machine-vision-based self-propelled dining car identification and positioning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20170818)