CN110154053A - Indoor explanation robot based on OCR and explanation method thereof - Google Patents


Info

Publication number
CN110154053A
CN110154053A (application number CN201910485389.6A)
Authority
CN
China
Prior art keywords
robot
explanation
exhibition position
tourist
ocr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910485389.6A
Other languages
Chinese (zh)
Inventor
刘淑华
陈俊宇
孔文玉
张梦宇
白晓颖
徐会昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Northeast Normal University
Original Assignee
Northeast Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Normal University filed Critical Northeast Normal University
Priority to CN201910485389.6A priority Critical patent/CN110154053A/en
Publication of CN110154053A publication Critical patent/CN110154053A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0255: Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an indoor explanation robot based on OCR and an explanation method thereof. The robot comprises an ultrasonic sensor, an infrared sensor, a tilt sensor, an OCR recognition module, an audio output module, an audio input module, a movement execution module, a data storage module and a central control unit. The invention uses image processing and text recognition technology to enhance the intelligence of the robot. The robot explains each exhibit at two levels, a brief summary and a detailed introduction, which helps visitors who find it inconvenient to read the exhibit's introduction text, or who want to view the exhibit attentively, to understand the exhibit. Indoors it is more accurate than the current GPS-based tour-guide apps, and it is more hygienic, convenient and lively than automatic audio-guide earphones. After briefly describing an exhibit, the robot can ask whether the visitor wants to go a step further and hear the detailed introduction of that exhibit, and gives it only if necessary; this selectable explanation mode meets the needs of different visitors and is more user-friendly. To a certain extent it saves human resources in the tour-guide industry and reduces labour costs.

Description

Indoor explanation robot based on OCR and explanation method thereof
Technical field
The invention belongs to the technical field of intelligent robots, and in particular relates to an indoor explanation robot based on OCR and an explanation method thereof.
Background technique
Robots for indoor tour guiding already exist in China, but their degree of intelligence is still low. With the continuous progress of science and technology and the continuous development of robotics, intelligent robots are gradually entering ordinary households, and many intelligent robots on the market bring convenience and enjoyment to people's lives. Among them, interactive robots, as one kind of intelligent robot, can interact with people and add much enjoyment to daily life, especially for the elderly and children. With the development of social productivity and the improvement of living standards, people spend more and more time on leisure and entertainment, which has greatly promoted the development of the tourism industry. In recent years, visiting cultural sites such as museums and former residences of celebrities has become fashionable, and the number of visitors keeps growing. To obtain a better visiting experience, a guide is usually needed to introduce the various articles to visitors and explain the related historical stories. Because visitors are numerous and must enter in batches, guides have to complete the same explanation work repeatedly, which wastes time and manpower and increases operating costs. At the same time, since the memory of the human brain is limited, a guide sometimes cannot answer the questions raised by visitors clearly and completely.
In the prior art, CN103699126 discloses a guidance method for an intelligent guide robot, but that guide robot mainly relies on WIFI for navigation and positioning and explains exhibits by recognizing articles. Its applicable scenarios are very limited: because a WIFI layout involves a large number of access points, a large deployment investment is needed when facing a large scene, while in a scene with a poor network environment or a small range accurate recognition and positioning cannot be achieved, so the explanation work cannot be completed.
Therefore, an indoor explanation robot based on OCR and an explanation method thereof are needed.
Summary of the invention
The object of the present invention is to provide an indoor explanation robot based on OCR and an explanation method thereof, so as to solve the problem that current robots for indoor tour guiding cannot achieve accurate recognition and positioning when facing a scene with a poor network environment or a small range, and therefore cannot complete the explanation work.
To achieve the above object, the technical scheme adopted by the invention is as follows:
An indoor explanation robot based on OCR, comprising an ultrasonic sensor, an infrared sensor, a tilt sensor, an OCR recognition module, an audio output module, an audio input module, a movement execution module, a data storage module and a central control unit. The central control unit is connected to the ultrasonic sensor, the infrared sensor, the tilt sensor, the OCR recognition module, the audio output module, the audio input module, the movement execution module and the data storage module. The ultrasonic sensor, the infrared sensor and the tilt sensor are used together for the explanation robot to measure the distance to an exhibit stand and to build a semantic map. The OCR recognition module is used to recognize the text information on an exhibit stand. The audio input module is used by the explanation robot to receive and recognize audio information, and the audio output module is used by the explanation robot to output the explanation audio corresponding to an exhibit stand. The movement execution module is used by the explanation robot to carry out the corresponding motions, and the data storage module stores the content information of the exhibit stands that the explanation robot needs to explain and the information collected by the explanation robot. The central control unit controls the operation of the whole explanation robot.
Preferably, the OCR recognition module uses the Tesseract-OCR engine.
Preferably, the central control unit uses a single-chip microcomputer system with an ARM core processor.
Preferably, the robot further comprises a camera connected to the central control unit.
Preferably, the robot further comprises a wireless communication module connected to the central control unit.
An explanation method of the indoor explanation robot based on OCR, comprising the following steps:
S1: From the data acquired by the ultrasonic sensor, the infrared sensor, the camera and the tilt sensor, generate the semantic map of the indoor exhibit-stand environment that the explanation robot is responsible for explaining, and store it in the data storage module of the explanation robot;
S2: The explanation robot asks the visitor's preferred visiting mode through the audio output module and receives the visitor's instruction through the audio input module; if the visitor chooses a sequential visit, the robot leads the visitor through the exhibit stands in order, and if the visitor specifies an exhibit stand to visit, the robot plans a path and leads the visitor to the specified stand;
S3: The explanation robot recognizes the current exhibit-stand information through the OCR recognition module, gives the visitor a brief summary of the current stand through the audio output module, and at the same time performs a position correction according to the exhibit-stand information;
S4: The explanation robot asks through the audio output module whether the visitor wants to know the detailed content of the current exhibit stand and receives the visitor's instruction through the audio input module; if so, the robot retrieves the detailed content of the current stand stored in the data storage module, and if not, go to step S5;
S5: The explanation robot asks through the audio output module whether the visitor wants to end the visit and receives the visitor's instruction through the audio input module; if the visitor ends the visit, the robot ends this tour, otherwise it returns to step S2.
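The flow S1-S5 above can be sketched as a small interactive loop. This is a minimal sketch only: the helper functions ask, say, goto and ocr_read stand in for the audio input/output, movement and OCR modules, and their names are illustrative, not taken from the patent.

```python
def run_tour(stands, details, ask, say, goto, ocr_read):
    """Sketch of the tour loop S2-S5.

    stands: ordered list of exhibit-stand names for a sequential visit.
    details: dict mapping a stand name to its stored detailed text.
    ask/say: audio input/output stand-ins; goto: movement stand-in;
    ocr_read: returns the stand name recognized at the current position.
    """
    while True:
        mode = ask('Sequential visit or a specific stand?')        # S2
        route = stands if mode == 'sequential' else [mode]
        for target in route:
            goto(target)
            name = ocr_read()                                      # S3: OCR the stand
            say('Summary of ' + name)
            if ask('Hear the details of ' + name + '?') == 'yes':  # S4
                say(details.get(name, 'No details stored.'))
        if ask('End the visit?') == 'yes':                         # S5
            say('Goodbye!')
            return
```

In a scripted run with two stands and canned visitor answers, the loop produces the summary for each stand, the details only where requested, and a closing message.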
Preferably, step S1 further comprises the sub-steps:
S101: First perform border detection: initialize the grid map to 0, start the sonars mounted on the left and right sides of the explanation robot, begin border detection, and read the corresponding sonar values; if both the left and right sonar values are less than 0.35 m, there is an obstacle ahead and the explanation robot turns left or right according to the inward-spiral algorithm, otherwise the robot goes straight; after border detection is completed, prepare for exhibit-stand detection;
S102: After completing border detection, the explanation robot begins to approach an exhibit stand; when its distance to the stand is equal to or less than 0.5 m, it fires its sonars and reads the actual distance value, and based on this value the central control unit of the robot drives it safely close to the stand; the robot then faces the stand and starts detecting around the stand clockwise;
S103: During exhibit-stand detection the explanation robot also completes obstacle detection: it captures a picture of the stand information with the camera and calls OCR to recognize the semantic information of the stand, thereby generating the semantic map of the indoor exhibit-stand environment that the explanation robot is responsible for explaining.
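The decision rule of S101 can be isolated as a pure function for testing off-robot. A minimal sketch under stated assumptions: the 0.35 m sonar threshold, the 0.08 m step length and the end condition (start cell revisited after at least three turns) come from the text, while the function names are illustrative.

```python
import math

TURN_THRESHOLD = 0.35  # metres; obstacle ahead if both sonars read below this

def border_step(lvalue, rvalue, count_turn):
    """Decide the next motion during border detection (S101).

    Returns (action, new_count_turn): either ('turn', angle_rad) for a
    90-degree inward-spiral turn, or ('straight', 0.08) for one step forward.
    """
    if lvalue < TURN_THRESHOLD and rvalue < TURN_THRESHOLD:
        # obstacle ahead: make an inward-spiral turn and count it
        return ('turn', -math.pi / 2.0), count_turn + 1
    return ('straight', 0.08), count_turn

def border_finished(start_cell_value, count_turn):
    """Border detection ends once the start cell is revisited (marked 1)
    after at least three turns."""
    return start_cell_value == 1 and count_turn >= 3
```

This only captures the turn/straight choice; the actual walking calls go through the movement execution module.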
Preferably, in step S3, every time the explanation robot reaches an exhibit stand, it performs a position correction using the exhibit-stand information; and when the robot is moved to some position by a person, it needs to relocalize. The specific positioning steps are as follows:
S301: The explanation robot turns around once in its current position; if it finds an exhibit stand, it approaches that stand, and if it does not find one, it moves a random distance and repeats until a stand is found;
S302: The explanation robot recognizes the exhibit-stand information through the OCR recognition module;
S303: The explanation robot matches the exhibit-stand information against the indoor exhibit-stand environment semantic map, thereby completing its self-localization.
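A minimal self-localization sketch for S301-S303, assuming the semantic map is stored as a dict from exhibit-stand name to grid coordinates; this representation and all names here are illustrative, not taken from the patent.

```python
# Hypothetical semantic map: stand name -> grid cell where that stand sits.
SEMANTIC_MAP = {
    'Bronze Mirror': (2, 3),
    'Porcelain Vase': (5, 1),
    'Stone Tablet': (7, 4),
}

def self_localize(recognized_name, semantic_map=SEMANTIC_MAP):
    """S303: match the OCR-recognized stand name against the map.

    Returns the stand's grid coordinates (the robot's corrected position),
    or None when the name is unknown and the robot must keep searching (S301).
    """
    return semantic_map.get(recognized_name)
```

In practice the returned cell would seed the position correction of step S3; an unknown name sends the robot back to its search-and-rotate behaviour.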
The beneficial effects of the method are as follows: (1) When a robot enters a new environment, it first needs to familiarize itself with that environment, just as a servant must in order to serve well.
(2) The present invention uses image processing and text recognition technology to enhance the intelligence of the robot. The robot first takes visitors to the corresponding exhibit according to their visiting requirements, and then explains it at two levels, a brief summary and a detailed introduction. This helps visitors who find it inconvenient to read the exhibit's introduction text, or who want to view the exhibit attentively, to understand the exhibit without being distracted, achieving a good audio-visual visiting effect.
(3) Indoors, the present invention is more accurate than the current GPS-based tour-guide apps, and more hygienic, convenient and lively than automatic audio-guide earphones. After briefly describing an exhibit, the robot can ask whether the visitor wants to go a step further and hear the detailed introduction of that exhibit, and gives it only if necessary; this selectable explanation mode meets the needs of different visitors and is more user-friendly. To a certain extent it saves human resources in the tour-guide industry and reduces labour costs.
(4) When the robot is moved by a person, it can localize itself by recognizing the exhibit-stand information with OCR; and during explanation, every time it reaches an exhibit stand, it performs a position correction using the stand information, which reduces the accumulated error produced as the robot walks.
Detailed description of the invention
Fig. 1 is a schematic working flow diagram of the explanation robot according to an embodiment of the present invention.
Fig. 2 is a schematic flow diagram of the self-localization performed by the explanation robot of an embodiment of the present invention after it finishes a tour.
Fig. 3 is the actual environment map of the explanation robot of an embodiment of the present invention.
Fig. 4 is the semantic map generated by the explanation robot of an embodiment of the present invention.
Fig. 5 is a structural schematic diagram of an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to Figures 1-5. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiment 1:
As shown in Fig. 5, an indoor explanation robot based on OCR comprises an ultrasonic sensor, an infrared sensor, a tilt sensor, an OCR recognition module, an audio output module, an audio input module, a movement execution module, a data storage module and a central control unit. The central control unit is connected to the ultrasonic sensor, the infrared sensor, the tilt sensor, the OCR recognition module, the audio output module, the audio input module, the movement execution module and the data storage module. The ultrasonic sensor, the infrared sensor and the tilt sensor are used together for the explanation robot to measure the distance to an exhibit stand and to build a semantic map. The OCR recognition module is used to recognize the text information on an exhibit stand. The audio input module is used by the explanation robot to receive and recognize audio information, and the audio output module is used by the explanation robot to output the explanation audio corresponding to an exhibit stand. The movement execution module is used by the explanation robot to carry out the corresponding motions, and the data storage module stores the content information of the exhibit stands that the explanation robot needs to explain and the information collected by the explanation robot. The central control unit controls the operation of the whole explanation robot.
Preferably in this embodiment, the OCR recognition module uses the Tesseract-OCR engine, currently one of the recognition engines with the highest accuracy in the industry. A sample library of one's own is trained with the jTessBoxEditor tool, so that the recognition accuracy is higher in the intended application domain.
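Even a trained Tesseract model can return noisy text. One hedged post-processing sketch (not described in the patent, but a common complement to a closed vocabulary such as a fixed set of exhibit-stand names) is to snap the raw OCR string onto the nearest known name; the stand names below are illustrative.

```python
import difflib

# Hypothetical closed vocabulary of exhibit-stand names.
KNOWN_STANDS = ['Bronze Mirror', 'Porcelain Vase', 'Stone Tablet']

def snap_to_known(ocr_text, known=KNOWN_STANDS, cutoff=0.6):
    """Return the known stand name closest to the raw OCR output,
    or None if nothing is similar enough (recognition is then ignored,
    as in Step 5 of the mapping algorithm below)."""
    matches = difflib.get_close_matches(ocr_text, known, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

For example, a misread such as 'Br0nze Mirro' still resolves to 'Bronze Mirror', while unrelated noise resolves to nothing.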
Preferably in this embodiment, the central control unit uses a single-chip microcomputer system with an ARM core processor, running an embedded Linux operating system. Its core frequency can reach 1.0 GHz; it has a 32/64-bit internal bus structure, 32 KB of level-1 data/instruction cache and 512 KB of level-2 cache, and can achieve a computing capability of 200 million instructions per second. The microcontroller core board has 512 MB of DDR2 memory and NAND flash memory, and is powered by a 5 V DC supply. An MFC multimedia hardware codec is integrated on the microcontroller extension board; it supports encoding and decoding of video formats such as MPEG-1/2/4, H.263 and H.264 and supports analogue and digital TV output, so as to handle the video output data. A JPEG hardware image codec is also integrated, which can encode and decode pictures of up to 8192x8192 resolution and hardware-encode the images captured by the camera. In order to connect the camera and the wireless communication module, a USB chip and three USB interfaces are also integrated on the extension board: one USB interface connects the camera, one connects the wireless communication module, and one is connected to the system debugging terminal interface. A 45-pin liquid crystal display interface is also provided on the extension board, using a 0.5 mm FFC/FPC flat-cable connector socket. The extension board is equipped with a sound-card chip, which is connected to the audio output circuit to provide the voice feedback of image recognition.
Preferably in this embodiment, the robot further comprises a camera connected to the central control unit.
Preferably in this embodiment, the robot further comprises a wireless communication module connected to the central control unit.
Embodiment 2:
As shown in Fig. 1, an explanation method of the indoor explanation robot based on OCR comprises the following steps:
S1: From the data acquired by the ultrasonic sensor, the infrared sensor, the camera and the tilt sensor, generate the semantic map of the indoor exhibit-stand environment that the explanation robot is responsible for explaining, and store it in the data storage module of the explanation robot;
S2: The explanation robot asks the visitor's preferred visiting mode through the audio output module and receives the visitor's instruction through the audio input module; if the visitor chooses a sequential visit, the robot leads the visitor through the exhibit stands in order, and if the visitor specifies an exhibit stand to visit, the robot plans a path and leads the visitor to the specified stand;
S3: The explanation robot recognizes the current exhibit-stand information through the OCR recognition module, gives the visitor a brief summary of the current stand through the audio output module, and at the same time performs a position correction according to the exhibit-stand information;
S4: The explanation robot asks through the audio output module whether the visitor wants to know the detailed content of the current exhibit stand and receives the visitor's instruction through the audio input module; if so, the robot retrieves the detailed content of the current stand stored in the data storage module, and if not, go to step S5;
S5: The explanation robot asks through the audio output module whether the visitor wants to end the visit and receives the visitor's instruction through the audio input module; if the visitor ends the visit, the robot ends this tour, otherwise it returns to step S2.
Preferably, step S1 further comprises the sub-steps:
S101: First perform border detection: initialize the grid map to 0, start the sonars mounted on the left and right sides of the explanation robot, begin border detection, and read the corresponding sonar values; if both the left and right sonar values are less than 0.35 m, there is an obstacle ahead and the explanation robot turns left or right according to the inward-spiral algorithm, otherwise the robot goes straight; after border detection is completed, prepare for exhibit-stand detection;
S102: After completing border detection, the explanation robot begins to approach an exhibit stand; when its distance to the stand is equal to or less than 0.5 m, it fires its sonars and reads the actual distance value, and based on this value the central control unit of the robot drives it safely close to the stand; the robot then faces the stand and starts detecting around the stand clockwise;
S103: During exhibit-stand detection the explanation robot also completes obstacle detection: it captures a picture of the stand information with the camera and calls OCR to recognize the semantic information of the stand, thereby generating the semantic map of the indoor exhibit-stand environment that the explanation robot is responsible for explaining.
Preferably, in step S3, every time the explanation robot reaches an exhibit stand, it performs a position correction using the exhibit-stand information; and when the robot is moved to some position by a person, it needs to relocalize. The specific positioning steps are as follows:
S301: The explanation robot turns around once in its current position; if it finds an exhibit stand, it approaches that stand, and if it does not find one, it moves a random distance and repeats until a stand is found;
S302: The explanation robot recognizes the exhibit-stand information through the OCR recognition module;
S303: The explanation robot matches the exhibit-stand information against the indoor exhibit-stand environment semantic map, thereby completing its self-localization.
Specifically, the explanation robot first performs border detection with the two sonars on its left and right sides. Then the robot is guided to the front of an obstacle by a red-ball tracker: when the robot tracks the red ball with its camera, it can obtain the distance and angle of the ball. The algorithm consists of three parts: border detection, obstacle detection and assigning semantic information. Border detection and obstacle detection are used to identify the environment boundary and all obstacles, and the semantic information is used to obtain the names of the obstacles.
First part: border detection.
When the explanation robot enters an unknown environment, it first performs border detection. The program code is written in Python on the software platform of the Nao robot. The pseudocode is as follows:
Step 1: initialize the grid map to 0.
map = [[0 for i in range(row)] for j in range(column)]
Step 2: start the sonars and begin border detection.
sonarProxy = ALProxy("ALSonar", IP, PORT)
memoryProxy = ALProxy("ALMemory", IP, PORT)
Step 3: read the sonar values.
lvalue = memoryProxy.getData("Device/SubDeviceList/US/Left/Sensor/Value")
rvalue = memoryProxy.getData("Device/SubDeviceList/US/Right/Sensor/Value")
Step 4: if both lvalue and rvalue are less than 0.35 m, the robot turns left or right according to the inward-spiral algorithm and the turn counter increases by 1; otherwise it goes straight.
if lvalue < 0.35 and rvalue < 0.35:
    self.motionProxy.walkTo(0, 0, -math.pi/2.0)  # turn
    self.motionProxy.walkTo(0.08, 0, 0)
    countTurn = countTurn + 1
else:
    self.motionProxy.walkTo(0.08, 0, 0)  # go straight
Step 5: if border detection is not finished, go to Step 3; otherwise go to Step 6 and prepare for obstacle detection.
if map[x][y] == 1 and countTurn >= 3:
    borderFinish = True
Step 6: end.
Parts two and three: obstacle detection and assigning semantic information.
After the explanation robot completes border detection, the red ball is used to guide it towards the front of an obstacle. Before approaching the obstacle, the explanation robot keeps following the red ball. When the robot is close enough to the obstacle (we set the threshold to 0.5 m), the red ball is quickly removed. If the robot cannot detect the red ball for 30 seconds, it fires its sonars and reads the sonar values; from these values it computes how many steps it must walk to get close enough to the obstacle. When it reaches the safe distance (0.25 m), it faces the obstacle and starts obstacle detection clockwise. When the robot finishes the border detection of an obstacle, it recognizes the exhibit-stand information by OCR to obtain the semantic information of the obstacle. To reduce the influence of noise, the explanation robot keeps going straight if the absolute value of the red-ball angle is less than 30 degrees; when the angle is greater than 30 degrees it turns left, and when the angle is less than -30 degrees it turns right. The pseudocode is as follows:
Step 1: starting red ball and track and follow red ball and the trackerProxy=ALProxy that freely walks The red ball detection of (" ALRedBallTracker ", IP, PORT) #launch
trackerProxy.startTracker()
If (abs (redBallAngle) < math.radians (30)):
MotionProxy.walkTo (0.08,0,0) # straight trip
Elif (redBallAngle >=math.radians (30)):
MotionProxy.walkTo (0,0, math.pi/2.0) # turns left
Head is switched to 0 radian by motionProxy.angleInterpolation (" HeadYaw ", 0,1.0, True) #
motionProxy.walkTo(0.08,0,0)
Elif (redBallAngle≤- math.radians (30)):
MotionProxy.walkTo (0,0 ,-math.pi/2.0) # turns right
MotionProxy.angleInterpolation (" HeadYaw ", 0,1.0, True)
motionProxy.walkTo(0.08,0,0)
Step 2: if arnotto ball is lost 30 seconds, explanation robot will emit its sonar.
SonarProxy=ALProxy (" ALSonar ", IP, PORT)
MemoryProxy=ALProxy (" ALMemory ", IP, PORT)
sonarProxy.subscribe('Test_sonar')
Step 3: reading the value of sonar
Lvalue=memoryProxy.getData (" Device/SubDeviceList/US/Left/Sensor/ Value”)
Rvalue=memoryProxy.getData (" Device/SubDeviceList/US/Right/Sensor/ Value”)
Step 4: if sonar value less than 0.5 meter, illustrates that there is barrier in front, robot calculates how many step that walk first It reaches at a distance from barrier less than 0.25 meter, is then transferred to detection of obstacles state;Otherwise, it indicates that map structuring terminates, turns To step 910.
If (lvalue < 0.5 or r value < 0.5):
The step-length of number of steps=INT (minimum value (lvalue -0.25, r value -0.25)/0.08) # robot is 0.08m
MotionProxy.walkTo (number of steps * 0.08,0.0,0.0)
CountTurn=0
Obstacle=true
ObstacleFinish=is false
Step 5: if lvalue and both less than 0.25 meter of small value, indicating explanation robot just towards barrier, at this time with taking the photograph As head one frame information of acquisition, then OCR is called to be identified, if there is representing the text information of exhibition position, be then stored into variable Otherwise result ignores this identification as the semantic information of current barrier, i.e. exhibition position information.If sonar value is not less than 0.25 meter, indicate that explanation robot is located at the corner of barrier, explanation robot will turn right after two steps of walking.
if lvalue > 0.25:
    for j in range(2):
        self.motionProxy.walkTo(0, 0.1, 0.0)
    self.motionProxy.walkTo(0, 0.0, -math.pi / 2.0)  # turn right
    for k in range(2):
        self.motionProxy.walkTo(0, 0.1, 0.0)
else:
    self.motionProxy.walkTo(0, 0.1, 0)  # keep walking to the left
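Step 5's filtering of the OCR output — keep a recognition only when it looks like booth signage, otherwise ignore it — might be sketched as below. The keyword list is a hypothetical stand-in for whatever marker the real exhibition signs carry, and the function name is ours:

```python
def extract_booth_name(ocr_text, keywords=("Booth", "Exhibit")):
    """Return the recognized text as the booth's semantic information
    when it contains a signage keyword; return None to ignore the
    recognition, as Step 5 does for non-booth text."""
    text = ocr_text.strip()
    if text and any(k in text for k in keywords):
        return text
    return None
```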
Step 6: repeat Step 5 until detection of the current obstacle is complete.
Step 7: the explanation robot inserts the text result recognized in Step 5 into index table t, and the index variable Scount is incremented by 1.
Scount += 1
t.no = Scount
t.name = result  # booth name
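The index table t of Step 7 amounts to an auto-numbered list of booth names; a minimal stand-in (the class name and dict rows are our own representation):

```python
class BoothIndex:
    """Stand-in for concordance table t: each recognized booth gets a
    sequential number (the Scount counter) and its OCR'd name."""
    def __init__(self):
        self.scount = 0
        self.rows = []

    def insert(self, result):
        self.scount += 1                       # Scount += 1
        self.rows.append({"no": self.scount,   # t.no = Scount
                          "name": result})     # t.name = result
```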
Step 8: go to Step 1.
Step 9: end.
The explanation robot completes the detection of all obstacles according to the above algorithm. If the explanation robot cannot detect the red ball within 30 seconds and its sonar detects no obstacle either, the explanation robot says "Time out, finished!".
In the description of the present invention, it should be understood that orientation or positional terms such as "counterclockwise", "clockwise", "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience in describing the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be understood as limiting the invention.

Claims (8)

1. An OCR-based indoor explanation robot, characterized by comprising an ultrasonic sensor, an infrared sensor, a tilt sensor, an OCR identification module, an audio output module, an audio input module, a mobile execution module, a data storage module and a central control unit, wherein the central control unit is connected to the ultrasonic sensor, the infrared sensor, the tilt sensor, the OCR identification module, the audio output module, the audio input module, the mobile execution module and the data storage module respectively; the ultrasonic sensor, the infrared sensor and the tilt sensor are used together to measure the distance between the explanation robot and each exhibition booth and to build a semantic map; the OCR identification module is used to recognize the text information on the booths; the audio input module is used by the explanation robot to receive and recognize audio information, and the audio output module is used by the explanation robot to output the explanation audio corresponding to each booth; the mobile execution module is used by the explanation robot to perform the corresponding movements, and the data storage module is used to store the booth content information required for explanation and the information collected by the explanation robot; the central control unit controls the operation of the entire explanation robot.
2. The OCR-based indoor explanation robot according to claim 1, wherein the OCR identification module uses the Tesseract-OCR engine.
3. The OCR-based indoor explanation robot according to claim 1, wherein the central control unit is a single-chip microcomputer system with an ARM-core processor.
4. The OCR-based indoor explanation robot according to claim 1, further comprising a camera connected to the central control unit.
5. The OCR-based indoor explanation robot according to claim 1, further comprising a wireless communication module connected to the central control unit.
6. An explanation method for the OCR-based indoor explanation robot according to claim 1, characterized by comprising the following steps:
S1: the explanation robot generates, from the data collected by the ultrasonic sensor, infrared sensor, camera and tilt sensor, a semantic map of the indoor booth environment it is responsible for explaining, and stores it in its data storage module;
S2: the explanation robot asks the visitor for a tour mode through the audio output module and receives the visitor's instruction through the audio input module; if the visitor chooses a sequential tour, the explanation robot leads the visitor through the booths in order; if the visitor specifies a booth, the explanation robot plans a path and leads the visitor to the specified booth;
S3: the explanation robot recognizes the current booth information through the OCR identification module and gives the visitor a brief summary of the current booth through the audio output module, while also correcting its position according to the booth information;
S4: the explanation robot asks, through the audio output module, whether the visitor wants the detailed content of the current booth and receives the visitor's instruction through the audio input module; if the visitor wants the detailed content of the current booth, the explanation robot retrieves it from the data storage module; if not, the method proceeds to step S5;
S5: the explanation robot asks, through the audio output module, whether the visitor wants to end the tour and receives the visitor's instruction through the audio input module; if the visitor ends the tour, the explanation robot ends this guided tour; otherwise it returns to step S2.
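Steps S2–S5 form a simple interaction loop. The sketch below models only the S2 tour-planning decision; the mode strings, argument names and return values are illustrative, and all speech input/output is abstracted away:

```python
def plan_tour(mode, target, booth_order):
    """S2: choose the booths to visit. 'sequential' walks every booth in
    order; naming a known booth yields a single-stop path; anything else
    returns an empty plan, meaning the robot must ask again."""
    if mode == "sequential":
        return list(booth_order)
    if mode == "specified" and target in booth_order:
        return [target]
    return []
```

In the full method, each booth in the returned plan would then go through S3 (summary plus position correction) and S4 (optional detailed explanation) before S5 asks whether to continue.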
7. The explanation method of the OCR-based indoor explanation robot according to claim 6, wherein step S1 further comprises the sub-steps:
S101: first perform border detection; the grid map is initialized to 0, the sonars on the left and right sides of the explanation robot are activated, border detection starts, and the corresponding sonar values are read; if both the left and right sonar values are less than 0.35 meters, there is an obstacle ahead and the explanation robot turns left or right according to an inward-spiral algorithm; otherwise the explanation robot goes straight; after border detection is complete, booth detection is prepared;
S102: after finishing border detection, the explanation robot begins to approach a booth; when its distance to the booth is less than or equal to 0.5 m, it activates its sonar and reads the actual distance value; based on this value, the central control unit steers the explanation robot safely close to the booth, and the robot then detects the booth clockwise along it, starting while facing the booth;
S103: during booth detection the explanation robot completes obstacle detection, captures booth information pictures with the camera, and calls OCR to recognize the semantic information of each booth, thereby generating the semantic map of the indoor booth environment the explanation robot is responsible for explaining.
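The S101 decision rule is small enough to state directly. The threshold and the two outcomes follow the claim; the function name and return labels are ours:

```python
def border_action(left_sonar, right_sonar, threshold=0.35):
    """S101: both sonar readings under 0.35 m means an obstacle ahead,
    so the robot turns (inward-spiral style); otherwise it goes straight."""
    if left_sonar < threshold and right_sonar < threshold:
        return "turn"
    return "straight"
```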
8. The explanation method of the OCR-based indoor explanation robot according to claim 6, wherein in step S3, each time the explanation robot reaches a booth it corrects its position using the booth information, or, when the robot has been moved to some position by a person, it needs to relocalize; the specific positioning steps are as follows:
S301: the explanation robot rotates one full circle at its current position; if it finds a booth, the explanation robot approaches that booth; if not, the explanation robot moves a random distance until it finds a booth;
S302: the explanation robot recognizes the booth information through the OCR identification module;
S303: the explanation robot matches the booth information against the indoor booth environment semantic map, thereby completing self-localization.
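S303's matching step reduces to a lookup of the OCR'd booth name in the stored semantic map. Here the map is modeled as a plain dict from booth name to pose, which is our assumption about its representation, not a detail given in the claim:

```python
def relocalize(booth_name, semantic_map):
    """S303: recover the robot's position from the recognized booth name;
    returns None when the name is not in the semantic map, in which case
    S301 would resume searching for another booth."""
    return semantic_map.get(booth_name)
```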
CN201910485389.6A 2019-06-05 2019-06-05 A kind of indoor explanation robot and its explanation method based on OCR Pending CN110154053A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910485389.6A CN110154053A (en) 2019-06-05 2019-06-05 A kind of indoor explanation robot and its explanation method based on OCR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910485389.6A CN110154053A (en) 2019-06-05 2019-06-05 A kind of indoor explanation robot and its explanation method based on OCR

Publications (1)

Publication Number Publication Date
CN110154053A true CN110154053A (en) 2019-08-23

Family

ID=67627774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910485389.6A Pending CN110154053A (en) 2019-06-05 2019-06-05 A kind of indoor explanation robot and its explanation method based on OCR

Country Status (1)

Country Link
CN (1) CN110154053A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111645089A (en) * 2020-06-17 2020-09-11 重庆大学 Museum tour guide robot and robot system
CN111881825A (en) * 2020-07-28 2020-11-03 深圳市点通数据有限公司 Interactive text recognition method and system based on multi-perception data
CN112652073A (en) * 2020-12-31 2021-04-13 中国电子科技集团公司信息科学研究院 Autonomous navigation method and system based on cloud network end robot
CN113296495A (en) * 2020-02-19 2021-08-24 苏州宝时得电动工具有限公司 Path forming method and device for self-moving equipment and automatic working system
CN113370229A (en) * 2021-06-08 2021-09-10 山东新一代信息产业技术研究院有限公司 Exhibition hall intelligent explanation robot and implementation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106989747A (en) * 2017-03-29 2017-07-28 无锡市中安捷联科技有限公司 A kind of autonomous navigation system based on indoor plane figure
CN107553505A (en) * 2017-10-13 2018-01-09 刘杜 Autonomous introduction system platform robot and explanation method
KR20180087798A (en) * 2017-01-25 2018-08-02 엘지전자 주식회사 Moving robot and control method therof
CN109129507A (en) * 2018-09-10 2019-01-04 北京联合大学 A kind of medium intelligent introduction robot and explanation method and system
CN208629445U (en) * 2017-10-13 2019-03-22 刘杜 Autonomous introduction system platform robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180087798A (en) * 2017-01-25 2018-08-02 엘지전자 주식회사 Moving robot and control method therof
CN106989747A (en) * 2017-03-29 2017-07-28 无锡市中安捷联科技有限公司 A kind of autonomous navigation system based on indoor plane figure
CN107553505A (en) * 2017-10-13 2018-01-09 刘杜 Autonomous introduction system platform robot and explanation method
CN208629445U (en) * 2017-10-13 2019-03-22 刘杜 Autonomous introduction system platform robot
CN109129507A (en) * 2018-09-10 2019-01-04 北京联合大学 A kind of medium intelligent introduction robot and explanation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邢琦玮, China Master's Theses Full-text Database, Information Science and Technology Series (《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113296495A (en) * 2020-02-19 2021-08-24 苏州宝时得电动工具有限公司 Path forming method and device for self-moving equipment and automatic working system
CN113296495B (en) * 2020-02-19 2023-10-20 苏州宝时得电动工具有限公司 Path forming method and device of self-mobile equipment and automatic working system
CN111645089A (en) * 2020-06-17 2020-09-11 重庆大学 Museum tour guide robot and robot system
CN111881825A (en) * 2020-07-28 2020-11-03 深圳市点通数据有限公司 Interactive text recognition method and system based on multi-perception data
CN111881825B (en) * 2020-07-28 2023-10-17 深圳市点通数据有限公司 Interactive text recognition method and system based on multi-perception data
CN112652073A (en) * 2020-12-31 2021-04-13 中国电子科技集团公司信息科学研究院 Autonomous navigation method and system based on cloud network end robot
CN113370229A (en) * 2021-06-08 2021-09-10 山东新一代信息产业技术研究院有限公司 Exhibition hall intelligent explanation robot and implementation method

Similar Documents

Publication Publication Date Title
CN110154053A (en) A kind of indoor explanation robot and its explanation method based on OCR
CN107179086B (en) Drawing method, device and system based on laser radar
Foxlin et al. VIS-Tracker: A Wearable Vision-Inertial Self-Tracker.
US6766245B2 (en) Landmark-based location of users
CN110378965A (en) Determine the method, apparatus, equipment and storage medium of coordinate system conversion parameter
CN106647745B (en) Diagnosis guiding robot autonomous navigation system and method based on Bluetooth positioning
WO2016131279A1 (en) Movement track recording method and user equipment
CN109724603A (en) A kind of Indoor Robot air navigation aid based on environmental characteristic detection
WO2021077941A1 (en) Method and device for robot positioning, smart robot, and storage medium
CN106292657A (en) Mobile robot and patrol path setting method thereof
CN105974456B (en) A kind of autonomous underwater vehicle combined navigation system
CN105074691A (en) Context aware localization, mapping, and tracking
CN103892995A (en) Electronic seeing-eye dog robot
CN108955682A (en) Mobile phone indoor positioning air navigation aid
CN103312899A (en) Smart phone with blind guide function
CN109916408A (en) Robot indoor positioning and air navigation aid, device, equipment and storage medium
Khan et al. Recent advances in vision-based indoor navigation: A systematic literature review
CN109059929A (en) Air navigation aid, device, wearable device and storage medium
WO2022161386A1 (en) Pose determination method and related device
Kamalam et al. Augmented reality-centered position navigation for wearable devices with machine learning techniques
CN111815844A (en) Intelligent machine tour guide and control method, control device and storage medium thereof
CN111477131A (en) Intelligent exhibition hall voice broadcasting device and broadcasting method thereof
WO2022000757A1 (en) Ar-based robot internet of things interaction method and apparatus, and medium
CN114153310A (en) Robot guest greeting method, device, equipment and medium
CN112017247A (en) Method for realizing unmanned vehicle vision by using KINECT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190823