CN110457884A - Target following method and apparatus, robot, and readable storage medium - Google Patents
Target following method and apparatus, robot, and readable storage medium
- Publication number
- CN110457884A CN110457884A CN201910724925.3A CN201910724925A CN110457884A CN 110457884 A CN110457884 A CN 110457884A CN 201910724925 A CN201910724925 A CN 201910724925A CN 110457884 A CN110457884 A CN 110457884A
- Authority
- CN
- China
- Prior art keywords
- target
- image
- authentication
- active user
- follow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Abstract
The present application provides a target following method and apparatus, a robot, and a readable storage medium, relating to the field of image processing. The method comprises: receiving follow request information; in response to the follow request information, performing identity verification on the current user; when the current user passes identity verification, taking the current user as the follow target; and following the follow target as it moves. The method verifies the target's identity before following, thereby improving following accuracy and safety.
Description
Technical field
This application relates to the technical field of image processing, and in particular to a target following method and apparatus, a robot, and a readable storage medium.
Background technique
With the rapid development of computer vision, image processing methods applied to image sequences acquired in real time can extract, identify, and follow a moving target in the images, obtain information such as the target's position and parameters, and further support understanding and analysis of the moving target. Such visual following technology is increasingly applied in military, industrial, and agricultural production, and has become a research focus. The design and implementation of visual tracking systems is therefore one of the important research topics in the computer field.
However, an existing vision-following robot usually begins to follow the sender of a follow instruction immediately upon receiving it. The instruction may have been triggered by a transient error, in which case no following is needed. On the other hand, if the robot follows whoever triggers the instruction, without knowing that person's identity and permissions, the robot also poses a considerable safety risk.
Summary of the invention
In view of this, embodiments of the present application aim to provide a target following method and apparatus, a robot, and a readable storage medium, so as to improve on the poor following accuracy and safety of existing robots.
An embodiment of the present application provides a target following method, the method comprising: receiving follow request information; in response to the follow request information, performing identity verification on the current user; when the current user passes identity verification, taking the current user as the follow target; and following the follow target as it moves. The method verifies the target's identity before following, improving following accuracy and safety.
In the above implementation, identity verification is performed on the target before following begins, and movement starts only after verification succeeds. This avoids following the wrong object or an unauthorized one, improving following accuracy and safety.
Optionally, the follow request information is sound information, and receiving the follow request information comprises: receiving, through a microphone array, the sound information issued by the current user. After receiving the follow request information, the method further comprises: locating the position of the sound source based on the signal strength of the sound information received by each microphone in the microphone array; and moving to the position of the sound source so that the current user can input identity verification information.
In the above implementation, sound information serves as the trigger, and the sender of the follow request is located directly from that sound. The robot can thus approach the target more conveniently and quickly before the subsequent follow operation, improving the efficiency and convenience of target following.
Optionally, before performing identity verification on the current user, the method further comprises: recording the current user's standard identity verification information. Performing identity verification on the current user comprises: matching the identity verification information input by the current user against the standard identity verification information; and when the two match successfully, determining that the current user passes identity verification.
In the above implementation, the user's identity is verified by matching the input identity verification information against pre-set standard information, ensuring the legitimacy and safety of the follow target.
Optionally, following the follow target as it moves comprises: acquiring the environment image of the current frame; determining whether an image of a specified type exists in the environment image of the current frame; and when an image of the specified type exists and that image is the image of the follow target, following the follow target as it moves.
In the above implementation, images are processed frame by frame to identify whether an image of the specified type is the follow target, making the follow response faster and enhancing real-time performance.
Optionally, when an image of the specified type exists and that image is the image of the follow target, following the follow target comprises: when an image of the specified type exists, determining the gray-level distribution at each position in the image of the specified type; obtaining, based on the gray-level distribution, a current gray-weighted histogram of the specified-type image that carries spatial position information; when the current gray-weighted histogram matches a gray-weighted histogram template, determining that the image of the specified type is the image of the follow target; determining the relative distance and relative direction between the current follow-target image and the target-following robot; planning a follow path based on the relative distance and relative direction; and following the follow target along the planned path.
In the above implementation, the follow target is identified by comparing the gray-weighted histograms, carrying spatial position information, of the specified-type image and the template image. Real-time following is achieved without any physical or signal connection, reducing the need for auxiliary equipment and improving convenience.
Optionally, before following the follow target as it moves, the method further comprises: when the current user passes identity verification, obtaining a template image of the current user; determining the gray-level distribution at each position in the template image; and obtaining, based on that distribution, a gray-weighted histogram template of the template image that carries spatial position information.
In the above implementation, the template image is obtained after the user completes identity verification, and the gray-weighted histogram template carrying spatial position information is preset from it, further improving the safety and accuracy of identity verification.
Optionally, following the follow target as it moves comprises: after the follow target has been lost for longer than a preset time and then reappears, performing identity verification on the current user again; when the current user passes identity verification, taking the current user as the follow target; and continuing to follow the follow target as it moves.
In the above implementation, after the tracked target is lost, its identity must be verified again before tracking resumes, further improving tracking safety and accuracy.
An embodiment of the present application also provides a target following apparatus, the apparatus comprising: a request receiving module for receiving follow request information; an authentication module for performing identity verification on the current user in response to the follow request information; a target determining module for taking the current user as the follow target when the current user passes identity verification; and a follow module for following the follow target as it moves.
In the above implementation, identity verification is performed on the target before following begins, and movement starts only after verification succeeds. This avoids following the wrong object or an unauthorized one, improving following accuracy and safety.
Optionally, the authentication module is specifically configured to: receive, through a microphone array, the sound information issued by the current user; locate the position of the sound source based on the signal strength of the sound information received by each microphone in the microphone array; and move to the position of the sound source so that the current user can input identity verification information.
In the above implementation, sound information serves as the trigger, and the sender of the follow request is located directly from that sound. The robot can thus approach the target more conveniently and quickly before the subsequent follow operation, improving the efficiency and convenience of target following.
Optionally, the authentication module comprises: a recording unit for recording the current user's standard identity verification information; a matching unit for matching the identity verification information input by the current user against the standard identity verification information; and an authentication unit for determining that the current user passes identity verification when the two match successfully.
In the above implementation, the user's identity is verified by matching the input identity verification information against pre-set standard information, ensuring the legitimacy and safety of the follow target.
Optionally, the follow module comprises: an acquisition unit for acquiring the environment image of the current frame; a type determining unit for determining whether an image of a specified type exists in the environment image of the current frame; and a follow unit for following the follow target when an image of the specified type exists and that image is the image of the follow target.
In the above implementation, images are processed frame by frame to identify whether an image of the specified type is the follow target, making the follow response faster and enhancing real-time performance.
Optionally, the follow module further comprises an image identification unit configured to: when an image of the specified type exists, determine the gray-level distribution at each position in the image of the specified type; obtain, based on the gray-level distribution, a current gray-weighted histogram of the specified-type image that carries spatial position information; when the current gray-weighted histogram matches a gray-weighted histogram template, determine that the image of the specified type is the image of the follow target; determine the relative distance and relative direction between the current follow-target image and the target-following robot; plan a follow path based on the relative distance and relative direction; and follow the follow target along the planned path.
In the above implementation, the follow target is identified by comparing the gray-weighted histograms, carrying spatial position information, of the specified-type image and the template image. Real-time following is achieved without any physical or signal connection, reducing the need for auxiliary equipment and improving convenience.
Optionally, the image identification unit is further configured to: when the current user passes identity verification, obtain a template image of the current user; determine the gray-level distribution at each position in the template image; and obtain, based on that distribution, a gray-weighted histogram template of the template image that carries spatial position information.
In the above implementation, the template image is obtained after the user completes identity verification, and the gray-weighted histogram template carrying spatial position information is preset from it, further improving the safety and accuracy of identity verification.
Optionally, the image identification unit is further configured to: after the follow target has been lost for longer than a preset time and then reappears, perform identity verification on the current user again; when the current user passes identity verification, take the current user as the follow target; and continue to follow the follow target as it moves.
In the above implementation, after the tracked target is lost, its identity must be verified again before tracking resumes, further improving tracking safety and accuracy.
An embodiment of the present application also provides a target-following robot comprising a memory and a processor. Program instructions are stored in the memory; when the processor reads and runs the program instructions, it performs the steps of any of the implementations described above.
An embodiment of the present application also provides a readable storage medium storing computer program instructions; when the computer program instructions are read and run by a processor, the steps of any of the implementations described above are performed.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the application and should not be regarded as limiting its scope. Those of ordinary skill in the art can derive other related drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a target following method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of making a follow request through sound information, provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of re-verification after the follow target is lost, provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of a visual following step provided by this embodiment;
Fig. 5 is a schematic flowchart of a follow-target identification step provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a target following apparatus provided by an embodiment of the present application.
Reference numerals: 20 - target following apparatus; 21 - request receiving module; 22 - authentication module; 23 - target determining module; 24 - follow module.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
Through study, the applicant found that more and more robots have an automatic follow function and can follow a moving target; they are widely used in restaurants, homes, the service industry, and similar scenes. However, an existing follow robot usually starts following as soon as it receives follow trigger information, no matter who sent it. In a scene such as a restaurant, the robot may then follow the wrong object: a robot originally serving table A, on receiving trigger information sent by a guest at table B, will follow the table-B guest, causing service confusion. On the other hand, when a follow robot is used in public, if it can follow anyone, the robot may get lost while following, or unauthorized personnel may mis-operate the machine, raising safety issues.
To solve the above problems, the present application provides a target following method. The method can be applied to a robot, a vehicle, or any other electronic device; this embodiment takes a target-following robot as an example. Please refer to Fig. 1, a schematic flowchart of a target following method provided by an embodiment of the present application. The specific steps of the method can be as follows:
Step S12: receive follow request information.
It should be understood that the follow request information may be actively obtained by the target-following robot, for example through a message queue, or the user may send the follow request information point-to-point directly to the target-following robot.
Optionally, the follow request information may be triggered through a corresponding key provided on the target-following robot: when the user wants to start the follow function, pressing the key causes the robot to receive the follow request information. The follow request information may also be sent to the robot by the user through a communication device such as a mobile phone or remote control, so that the robot responds to it.
Besides the above, the follow request information may also be sound information, gesture information, or the like. The user utters a specific voice combination or makes a specific body movement; the robot captures the voice through a microphone or the movement through a camera, and recognizes the voice combination or body movement as the preset follow request information.
Optionally, when the follow request information is sound information, the trigger voice combinations may be preset as phrases such as "please follow", "start following", "follow", or "walk with me"; when the follow request information is a body movement, the trigger movements may be preset as waving or circling one hand.
Taking sound information as an example, please refer to Fig. 2, a schematic flowchart of making a follow request through sound information, provided by an embodiment of the present application. The request step can specifically be as follows:
Step S12.1: receive, through a microphone array, the sound information issued by the current user.
The microphone array comprises several centrally symmetric linear sub-arrays whose extension lines intersect at a common centre point. Optionally, the angles between every two linear sub-arrays are equal, and the sub-arrays may be arranged in the same plane or in different planes as the case requires, enlarging the acquisition range as much as possible so that sound information is collected more comprehensively.
Step S12.2: locate the position of the sound source based on the signal strength of the sound information received by each microphone in the microphone array.
Specifically, the signal strengths of the sound information collected by the microphones are compared, and the linear sub-array containing the pair of microphones with the strongest received signal is selected. The sound information collected along each radial direction is then matched against the source frequency to determine the distance between the sound source and the microphone array, thereby locating the source's position.
Step S12.3: move to the position of the sound source so that the current user can input identity verification information.
Optionally, to avoid colliding with the user and provide a better user experience, the robot may also stop at a preset distance from the user, measured by infrared ranging, ultrasonic ranging, or similar means. The preset distance can be 0.5 m, 1 m, or any other distance convenient for the user to input identity verification information.
It should be understood that before step S14 performs identity verification, standard identity verification information must also be recorded, so that during verification the information input by the user can be compared against it for an accurate verification result.
Step S14: in response to the follow request information, perform identity verification on the current user.
Specifically, the verification steps against the standard identity verification information can be as follows:
Step S14.1: in response to the follow request information, match the identity verification information input by the current user against the standard identity verification information.
Step S14.2: when the identity verification information matches the standard identity verification information, determine that the current user passes identity verification.
Optionally, in this embodiment the verification may take the form of voiceprint authentication, iris authentication, fingerprint authentication, password authentication, gesture authentication, authentication by reading an identity card or other certificate information, face authentication, and so on.
Step S16: when the current user passes identity verification, take the current user as the follow target.
Optionally, a user may briefly leave the follow range while using the follow function, or may leave it deliberately when following is no longer needed. For more accurate following, please refer to Fig. 3; this embodiment may further include the following steps:
Step S16.1: after the follow target has been lost for longer than a preset time and then reappears, perform identity verification on the current user again.
Optionally, the preset time can be adjusted according to the robot's specific type of work, and may be 2 seconds, 10 seconds, 60 seconds, or any other duration.
Step S16.2: when the current user passes identity verification, take the current user as the follow target, and continue to follow the follow target as it moves.
Optionally, before resuming, the robot may also send the follow target a confirmation message asking whether following should continue, and decide based on the target's reply.
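Steps S16.1 and S16.2 can be sketched as a small state tracker: it records when the target disappeared and, when the target reappears, reports whether the absence exceeded the preset time so that re-verification is required. The class and method names are hypothetical; the 10-second default is one of the example durations above.

```python
class FollowSession:
    """Tracks how long the follow target has been out of view."""

    def __init__(self, timeout_s=10.0):
        self.timeout_s = timeout_s
        self.lost_since = None  # timestamp when the target disappeared

    def update(self, target_visible, now):
        """Return True when the reappearing target must re-authenticate."""
        if target_visible:
            needs_reauth = (self.lost_since is not None
                            and now - self.lost_since > self.timeout_s)
            self.lost_since = None
            return needs_reauth
        if self.lost_since is None:
            self.lost_since = now  # start timing the absence
        return False
```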
Step S18: follow the follow target as it moves.
It should be understood that to follow in real time without a physical or signal connection, visual tracking is generally used. Visual tracking can identify the target by locking onto colour, shape, or other features; this embodiment takes the colour feature as an example. Please refer to Fig. 4, a schematic flowchart of a visual following step provided by this embodiment. The specific steps can be as follows:
Step S18.1: acquire the environment image of the current frame.
In this embodiment, the surrounding environment image may be acquired through one or more cameras; to cover a wider range, a wide-angle camera or multiple cameras set at different angles may be used.
Step S18.2: determine whether an image of the specified type exists in the environment image of the current frame.
It should be understood that the specified type can be chosen according to the target to be followed: when following a person, the human contour image is set as the specified type; when the target to be followed is an object, that object's contour image is set as the specified type.
Optionally, the image of the specified type may be recognized through deep learning, template matching, or similar means. Taking deep learning as an example, the recognition process can be as follows: obtain contour data of the human body to be identified from the environment through a preset human detection model; determine a human detection box from the contour data, and crop the corresponding image-to-recognize from the environment image using the detection box; extract features from the image-to-recognize through a preset human-attribute recognition model to obtain the attribute features of the human body in it; and determine, based on the attribute features, that the human body to be identified is an image of the specified type.
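The recognition pipeline above can be sketched as a chain of three stages. The callables `detect_person`, `extract_attributes`, and `is_target_type` are hypothetical placeholders for the preset human detection model, the human-attribute recognition model, and the attribute-based decision; only the wiring between them follows the text.

```python
def detect_specified_type(frame, detect_person, extract_attributes, is_target_type):
    """Return the detection boxes whose crops are of the specified type."""
    matches = []
    for (x, y, w, h) in detect_person(frame):            # human detection boxes
        crop = [row[x:x + w] for row in frame[y:y + h]]  # image-to-recognize
        attrs = extract_attributes(crop)                 # attribute features
        if is_target_type(attrs):                        # specified type?
            matches.append((x, y, w, h))
    return matches
```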
Step S18.3: when an image of the specified type exists and that image is the image of the follow target, follow the follow target as it moves.
Specifically, please refer to Fig. 5, a schematic flowchart of a follow-target identification step provided by an embodiment of the present application. Step S18.3 can specifically be as follows:
Step S18.3A: when an image of the specified type exists, determine the gray-level distribution at each position in the image of the specified type.
The image of the specified type is first converted to grayscale, yielding the gray distribution over all positions in it. Grayscale conversion is the process of changing a colour image containing brightness and colour into a gray image. The gray distribution refers to the distribution of the gray values of the gray image and reflects its most basic statistical characteristics.
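The conversion can be sketched as below, using the common ITU-R BT.601 luma weights; the patent does not name a specific conversion formula, so the weights are an assumption.

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to a
    grayscale image using BT.601 luma weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]
```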
Step S18.3B: obtain, based on the gray distribution, the current gray-weighted histogram of the specified-type image carrying spatial position information.
A gray histogram is a function of the gray-level distribution and a statistic of it: it counts, for every gray value, how often pixels of that value appear in the digital image. As a function of gray level, it gives the number of pixels in the image at each gray level and so reflects the frequency with which each gray value occurs. A gray-weighted histogram is a gray histogram in which the contribution of each position is assigned a weight.
Specifically, the image of the specified type can be regarded as a rectangular region centred at x0. Let h be the kernel (ellipse) radius of each pixel position, let {xi, i = 1, 2, ..., n} be the pixel positions in the rectangular region, and let b(xi) be the histogram bin index corresponding to pixel xi. The histogram carrying spatial position information then assigns a weight to the point at each position, the weight being adjusted by a Gaussian kernel function according to the point's distance from the centre point y of the candidate region.
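A minimal sketch of the spatially weighted histogram described above: each pixel votes into its gray bin b(xi) with a Gaussian-kernel weight that decays with the pixel's distance from the region centre. The bin count and kernel bandwidth are illustrative assumptions.

```python
import math

def weighted_gray_histogram(gray, bins=16, sigma=0.5):
    """Normalised gray histogram with Gaussian spatial weighting.

    gray: 2-D list of gray values in [0, 255].
    """
    h, w = len(gray), len(gray[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # region centre
    hist = [0.0] * bins
    for y in range(h):
        for x in range(w):
            # Normalised squared distance to the centre drives the weight.
            d2 = ((y - cy) / h) ** 2 + ((x - cx) / w) ** 2
            weight = math.exp(-d2 / (2 * sigma ** 2))
            bin_idx = min(gray[y][x] * bins // 256, bins - 1)
            hist[bin_idx] += weight
    total = sum(hist)
    return [v / total for v in hist]
```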
Step S18.3C: when the current gray-weighted histogram matches the gray-weighted histogram template, determine that the image of the specified type is the image of the follow target.
The histogram matching process is as follows. Let q be the known gray-weighted histogram template of the specified-type image, and let p(y) denote the histogram of the candidate region centred at point y. The goal of matching is to find a candidate region whose internal histogram description is similar to the known template q. If the tracking result of the previous frame is y0, the initial histogram of the current frame can be calculated from it. The histogram of the candidate region is regarded as a function of the centre point y, as in formula (1):

p_u(y) = C_h Σ_{i=1}^{n} k(‖(y − x_i)/h‖²) δ[b(x_i) − u]    (1)

where C_h is a normalization constant, k is the kernel profile, and δ is the Kronecker delta. Expanding the similarity between p(y) and q around y0 splits it into two terms; the former term is a determined value, so MeanShift can be used to maximize the latter. When the difference between the value of formula (1) for the specified-type image and the value for the gray-weighted histogram template is less than the preset error, the image of the specified type is determined to be the image of the follow target.
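The matching decision can be sketched with the Bhattacharyya coefficient, the similarity that MeanShift-based trackers typically maximize over candidate centres y. Using it here, together with the error-threshold rule, is an assumption consistent with the expansion described above; the patent does not name the similarity measure.

```python
import math

def bhattacharyya(p, q):
    """Similarity between a candidate histogram p(y) and the template q."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def is_follow_target(candidate_hist, template_hist, max_error=0.1):
    # Matching succeeds when the histograms differ by less than the
    # preset error (the 0.1 threshold is an illustrative assumption).
    return 1.0 - bhattacharyya(candidate_hist, template_hist) < max_error
```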
Step S18.3D: determine the relative distance and relative direction between the current follow-target image and the target-following robot.
It should be understood that the relative distance and direction can be calculated from the scale ratio between the current follow-target image and real space, or obtained through a ranging sensor.
Step S18.3E: plan a following path based on the relative distance and relative direction.
The above path planning means: given the initial position of the target-following robot and a given target position, planning a collision-free path with optimal time (or energy) consumption in an environment containing obstacles.
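One simple instance of such collision-free planning is breadth-first search on an occupancy grid. This is an illustrative sketch only; the patent does not specify a planner, and a practical robot would more likely use A* or a sampling-based method.

```python
from collections import deque

def plan_follow_path(grid, start, goal):
    """Shortest collision-free path on an occupancy grid by BFS.
    `grid[r][c]` is 1 for an obstacle and 0 for free space; `start` and
    `goal` are (row, col) cells.  Returns the list of cells from start
    to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                 # back-pointers for path recovery
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

BFS guarantees the fewest grid steps, which stands in for the "time-optimal" criterion on a uniform-cost grid; energy-optimal planning would need weighted edges and Dijkstra or A*.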
Step S18.3F: move along the planned following path to follow the target.
Optionally, to avoid colliding with fixed or moving objects while moving, obstacle avoidance can also be performed based on sensor information such as infrared sensors or ultrasonic sensors.
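A minimal reactive policy over such range readings might look as follows. The sensor layout, threshold distances, and command names are all illustrative assumptions, not specified by the patent.

```python
def avoidance_command(ranges_m, stop_m=0.3, slow_m=0.8):
    """Reactive avoidance over infrared/ultrasonic range readings:
    stop inside the stop zone, veer away from the nearest obstacle
    inside the warning zone, otherwise keep following.  `ranges_m`
    maps a sensor direction ('left', 'front', 'right') to its range
    reading in meters; the thresholds are illustrative defaults."""
    nearest_dir = min(ranges_m, key=ranges_m.get)   # direction of the closest reading
    nearest = ranges_m[nearest_dir]
    if nearest < stop_m:
        return "stop"
    if nearest < slow_m:
        return "veer_right" if nearest_dir == "left" else "veer_left"
    return "continue"
```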
It should be understood that before executing above-mentioned steps S18.3, it is also necessary to carry out setting for intensity-weighted histogram template
Fixed, in order to improve the accuracy and safety of template acquisition, template setting procedure is arranged after authentication the present embodiment,
Specific steps may include: to obtain the template image of active user when active user passes through authentication;Determine template image
The grey level distribution situation of middle each position;Based on the grey level distribution situation of each position in template image, template image is obtained
The intensity-weighted histogram template of carrying space location information.
As an alternative embodiment, if the followed target has other persistent characteristics besides its shape features, such as continuously emitting a sound of a certain frequency or intensity, following can also be realized according to those persistent characteristics.
The embodiment of the present application also provides a target following device 20. Please refer to FIG. 6, which is a structural schematic diagram of the target following device provided by the embodiment of the present application.
The target following device 20 includes:
a request receiving module 21, configured to receive following request information;
an identity verification module 22, configured to respond to the following request information and perform identity verification on the current user;
a target determination module 23, configured to, when the current user passes identity verification, take the current user as the target to be followed; and
a following module 24, configured to follow the target as it moves.
The identity verification module 22 is specifically configured to: receive, through a microphone array, the acoustic information emitted by the current user; after the following request information is received, locate the position of the sound source based on the signal strength of the acoustic information received by each microphone in the microphone array; and move to the position of the sound source so that the current user can input identity verification information.
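A coarse version of the signal-strength localization described above can be sketched as the level-weighted mean of the microphone directions. This is an illustrative assumption; the patent does not give the localization algorithm, and real arrays typically use time-difference-of-arrival instead.

```python
import math

def sound_bearing(mic_angles_rad, mic_levels):
    """Estimate the sound-source bearing of a circular microphone array
    as the angle of the level-weighted sum of each microphone's unit
    direction vector.  `mic_angles_rad` are the microphones' mounting
    angles; `mic_levels` are the corresponding received signal
    strengths."""
    x = sum(l * math.cos(a) for a, l in zip(mic_angles_rad, mic_levels))
    y = sum(l * math.sin(a) for a, l in zip(mic_angles_rad, mic_levels))
    return math.atan2(y, x)  # bearing of the dominant sound direction
```

With four microphones at 90° spacing and the loudest reading on the 90° microphone, the estimate lands near 90°, which the robot could then turn toward before prompting for identity verification.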
Optionally, the identity verification module 22 further includes: an entry unit, configured to enter the identity verification reference information of the current user; a matching unit, configured to match the identity verification information input by the current user against the identity verification reference information; and a verification unit, configured to determine that the current user passes identity verification when the identity verification information successfully matches the identity verification reference information.
The following module 24 includes: an acquisition unit, configured to acquire an environment image of the current frame; a type determination unit, configured to determine whether an image of the specified type exists in the environment image of the current frame; and a following unit, configured to, when an image of the specified type exists and the image of the specified type is the image of the followed target, follow the target as it moves.
Further, the following module 24 also includes an image identification unit, configured to: when an image of the specified type exists, determine the gray-level distribution of each position in the image of the specified type; obtain, based on that gray-level distribution, the current gray-weighted histogram of the image of the specified type carrying spatial location information; when the current gray-weighted histogram successfully matches the gray-weighted histogram template, determine that the image of the specified type is the image of the followed target; determine the relative distance and relative direction between the currently followed target and the target-following robot; plan a following path based on the relative distance and relative direction; and move along the following path to follow the target.
The image identification unit is also configured to: when the current user passes identity verification, obtain a template image of the current user; determine the gray-level distribution of each position in the template image; and, based on that gray-level distribution, obtain the gray-weighted histogram template of the template image carrying spatial location information.
Optionally, the image identification unit is also configured to: when the followed target has been lost for longer than a preset time and then reappears, perform identity verification on the current user again; when the current user passes identity verification, take the current user as the target to be followed; and continue to follow the target as it moves.
The embodiment of the present application also provides a target-following robot. The target-following robot includes a memory and a processor; program instructions are stored in the memory, and when the processor reads and runs the program instructions, it executes the steps in any one of the target following methods provided in this embodiment.
It should be understood that the processor should have image processing and sound processing capabilities, so as to process the environment images and acoustic information and thereby realize the following function.
Optionally, the target-following robot further includes a traveling mechanism which, under the instruction of the processor, controls the target-following robot to move along the planned path. The traveling mechanism may be a combination of mechanisms such as motors with wheels or robotic legs.
The embodiment of the present application also provides a readable storage medium in which computer program instructions are stored; when the computer program instructions are read and run by a processor, the steps of the target following method are executed.
In conclusion the embodiment of the present application provides a kind of target follower method, device, robot and storage can be read is situated between
Matter, which comprises reception follows solicited message;Response follows solicited message, carries out authentication to active user;Working as
When preceding user passes through authentication, using active user as following target;Follow this that target is followed to be moved.The target side of following
Method carries out authentication to target before carrying out following movement, follows accuracy and safety to improve.
In above-mentioned implementation, follow it is mobile before authentication carried out to target, authentication pass through after
It carries out following movement, the object for avoiding the object for following mistake or following lack of competence follows accuracy and safety to improve
Property.
In the several embodiments provided in this application, it should be understood that the disclosed equipment may also be realized in other ways. The apparatus embodiments described above are merely exemplary. For example, the block diagrams in the accompanying drawings show the possible architecture, functions, and operation of the equipment according to multiple embodiments of this application. In this regard, each box in a block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams, and combinations of boxes, may be realized by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of this application may be integrated together to form one independent part, the modules may exist separately, or two or more modules may be integrated to form one independent part.
If the functions are realized in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. This embodiment therefore also provides a readable storage medium in which computer program instructions are stored; when the computer program instructions are read and run by a processor, the steps in any one of the target following methods are executed. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an example of this application and is not intended to limit its scope of protection; for those skilled in the art, various changes and variations to this application are possible. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of this application shall be included within its scope of protection. It should also be noted that similar labels and letters indicate similar items in the following drawings; therefore, once an item has been defined in one drawing, it need not be further defined and explained in subsequent drawings.
The above are only specific embodiments of this application, but the scope of protection of this application is not limited thereto. Any person familiar with the technical field can, within the technical scope disclosed by this application, easily think of changes or replacements, which shall all be covered within the scope of protection of this application.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Claims (10)
1. A target following method, characterized in that it is applied to a target-following robot and comprises:
receiving following request information;
responding to the following request information by performing identity verification on a current user;
when the current user passes identity verification, taking the current user as a target to be followed; and
following the target as it moves.
2. The target following method according to claim 1, characterized in that the following request information is acoustic information, and receiving the following request information comprises:
receiving, through a microphone array, the acoustic information emitted by the current user;
and after receiving the following request information, the method further comprises:
locating the position of the sound source based on the signal strength of the acoustic information received by each microphone in the microphone array; and
moving to the position of the sound source so that the current user inputs identity verification information.
3. The target following method according to claim 1, characterized in that, before performing identity verification on the current user, the method further comprises:
entering the identity verification reference information of the current user;
and performing identity verification on the current user comprises:
matching the identity verification information input by the current user against the identity verification reference information; and
when the identity verification information successfully matches the identity verification reference information, determining that the current user passes identity verification.
4. The target following method according to claim 1, characterized in that following the target as it moves comprises:
acquiring an environment image of a current frame;
determining whether an image of a specified type exists in the environment image of the current frame; and
when an image of the specified type exists and the image of the specified type is the image of the followed target, following the target as it moves.
5. The target following method according to claim 4, characterized in that, when an image of the specified type exists and the image of the specified type is the image of the followed target, following the target as it moves comprises:
when an image of the specified type exists, determining the gray-level distribution of each position in the image of the specified type;
obtaining, based on the gray-level distribution, a current gray-weighted histogram of the image of the specified type carrying spatial location information;
when the current gray-weighted histogram successfully matches a gray-weighted histogram template, determining that the image of the specified type is the image of the followed target;
determining the relative distance and relative direction between the currently followed target and the target-following robot;
planning a following path based on the relative distance and relative direction; and
moving along the following path to follow the target.
6. The target following method according to claim 5, characterized in that, before following the target as it moves when an image of the specified type exists and the image of the specified type is the image of the followed target, the method further comprises:
when the current user passes identity verification, obtaining a template image of the current user;
determining the gray-level distribution of each position in the template image; and
obtaining, based on the gray-level distribution of each position in the template image, the gray-weighted histogram template of the template image carrying spatial location information.
7. The target following method according to claim 1, characterized in that following the target as it moves comprises:
when the followed target has been lost for longer than a preset time and then reappears, performing identity verification on the current user again;
when the current user passes identity verification, taking the current user as the target to be followed; and
continuing to follow the target as it moves.
8. A target following device, characterized in that the device comprises:
a request receiving module, configured to receive following request information;
an identity verification module, configured to respond to the following request information and perform identity verification on a current user;
a target determination module, configured to, when the current user passes identity verification, take the current user as a target to be followed; and
a following module, configured to follow the target as it moves.
9. A target-following robot, characterized in that the target-following robot comprises a memory and a processor, program instructions being stored in the memory; when the processor runs the program instructions, the steps in the method of any one of claims 1-7 are executed.
10. A readable storage medium, characterized in that computer program instructions are stored in the readable storage medium; when the computer program instructions are run by a processor, the steps in the method of any one of claims 1-7 are executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910724925.3A CN110457884A (en) | 2019-08-06 | 2019-08-06 | Target follower method, device, robot and read/write memory medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910724925.3A CN110457884A (en) | 2019-08-06 | 2019-08-06 | Target follower method, device, robot and read/write memory medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110457884A true CN110457884A (en) | 2019-11-15 |
Family
ID=68485173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910724925.3A Pending CN110457884A (en) | 2019-08-06 | 2019-08-06 | Target follower method, device, robot and read/write memory medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110457884A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111053564A (en) * | 2019-12-26 | 2020-04-24 | 上海联影医疗科技有限公司 | Medical equipment movement control method and medical equipment |
CN111941431A (en) * | 2020-09-04 | 2020-11-17 | 上海木木聚枞机器人科技有限公司 | Automatic following method and system for hospital logistics robot and storage medium |
CN112634561A (en) * | 2020-12-15 | 2021-04-09 | 中标慧安信息技术股份有限公司 | Safety alarm method and system based on image recognition |
CN113858216A (en) * | 2021-12-01 | 2021-12-31 | 南开大学 | Robot following method, device and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080285797A1 (en) * | 2007-05-15 | 2008-11-20 | Digisensory Technologies Pty Ltd | Method and system for background estimation in localization and tracking of objects in a smart video camera |
CN101814137A (en) * | 2010-03-25 | 2010-08-25 | 浙江工业大学 | Driver fatigue monitor system based on infrared eye state identification |
CN105760824A (en) * | 2016-02-02 | 2016-07-13 | 北京进化者机器人科技有限公司 | Moving body tracking method and system |
CN107331390A (en) * | 2017-05-27 | 2017-11-07 | 芜湖星途机器人科技有限公司 | Robot voice recognizes the active system for tracking of summoner |
CN108888204A (en) * | 2018-06-29 | 2018-11-27 | 炬大科技有限公司 | A kind of sweeping robot calling device and call method |
-
2019
- 2019-08-06 CN CN201910724925.3A patent/CN110457884A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080285797A1 (en) * | 2007-05-15 | 2008-11-20 | Digisensory Technologies Pty Ltd | Method and system for background estimation in localization and tracking of objects in a smart video camera |
CN101814137A (en) * | 2010-03-25 | 2010-08-25 | 浙江工业大学 | Driver fatigue monitor system based on infrared eye state identification |
CN105760824A (en) * | 2016-02-02 | 2016-07-13 | 北京进化者机器人科技有限公司 | Moving body tracking method and system |
CN107331390A (en) * | 2017-05-27 | 2017-11-07 | 芜湖星途机器人科技有限公司 | Robot voice recognizes the active system for tracking of summoner |
CN108888204A (en) * | 2018-06-29 | 2018-11-27 | 炬大科技有限公司 | A kind of sweeping robot calling device and call method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111053564A (en) * | 2019-12-26 | 2020-04-24 | 上海联影医疗科技有限公司 | Medical equipment movement control method and medical equipment |
CN111053564B (en) * | 2019-12-26 | 2023-08-18 | 上海联影医疗科技股份有限公司 | Medical equipment movement control method and medical equipment |
CN111941431A (en) * | 2020-09-04 | 2020-11-17 | 上海木木聚枞机器人科技有限公司 | Automatic following method and system for hospital logistics robot and storage medium |
CN111941431B (en) * | 2020-09-04 | 2022-03-08 | 上海木木聚枞机器人科技有限公司 | Automatic following method and system for hospital logistics robot and storage medium |
CN112634561A (en) * | 2020-12-15 | 2021-04-09 | 中标慧安信息技术股份有限公司 | Safety alarm method and system based on image recognition |
CN113858216A (en) * | 2021-12-01 | 2021-12-31 | 南开大学 | Robot following method, device and system |
CN113858216B (en) * | 2021-12-01 | 2022-02-22 | 南开大学 | Robot following method, device and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110457884A (en) | Target follower method, device, robot and read/write memory medium | |
KR102063037B1 (en) | Identity authentication method, terminal equipment and computer readable storage medium | |
JP7351861B2 (en) | System and method for specifying user identifiers | |
KR102573482B1 (en) | Biometric security system and method | |
CN106897675B (en) | Face living body detection method combining binocular vision depth characteristic and apparent characteristic | |
EP3373202B1 (en) | Verification method and system | |
EP3308325B1 (en) | Liveness detection method and device, and identity authentication method and device | |
JP5024067B2 (en) | Face authentication system, method and program | |
EP3951750A1 (en) | Liveness detection safe against replay attack | |
Chew et al. | Sensors-enabled smart attendance systems using NFC and RFID technologies | |
EP3862897A1 (en) | Facial recognition for user authentication | |
CN106033601B (en) | The method and apparatus for detecting abnormal case | |
TW201911130A (en) | Method and device for remake image recognition | |
CN105518708A (en) | Method and equipment for verifying living human face, and computer program product | |
JP2018163096A (en) | Information processing method and information processing device | |
KR102593624B1 (en) | Online Test System using face contour recognition AI to prevent the cheating behaviour and method thereof | |
CN104662561A (en) | Skin-based user recognition | |
CN109120854B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
JP2021531601A (en) | Neural network training, line-of-sight detection methods and devices, and electronic devices | |
CN106471440A (en) | Eye tracking based on efficient forest sensing | |
CN111738199B (en) | Image information verification method, device, computing device and medium | |
CA3049042A1 (en) | System and method for authenticating transactions from a mobile device | |
Zhou et al. | Multi-modal face authentication using deep visual and acoustic features | |
Herlianto et al. | IoT-based student monitoring system for smart school applications | |
CN114004639A (en) | Preferential information recommendation method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191115 |