CN110238854A - Robot control method, apparatus, electronic device and storage medium - Google Patents
- Publication number: CN110238854A
- Application number: CN201910533876.5A
- Authority: CN (China)
- Prior art keywords: face, robot, image, facial image, sub-image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot control method, apparatus, electronic device and storage medium, comprising: determining the face sub-images contained in an environment image captured by a robot; controlling the robot to display the face sub-images; determining a target face image among the currently displayed face sub-images; and controlling the robot to perform focus following on the target object corresponding to the target face image. In embodiments of the present invention, the electronic device can identify each face sub-image in the environment image captured by the robot, determine a target face image among the currently displayed face sub-images, and control the robot to perform focus following on the target object corresponding to that target face image. The disclosed robot control scheme thus puts the detected faces on the screen, so that users can intuitively perceive the robot's visual capability; and since a target object is determined and followed, the flexibility of the robot's service and the user experience are improved.
Description
Technical field
The present invention relates to the field of robot technology, and in particular to a robot control method, apparatus, electronic device and storage medium.
Background technique
With the rapid development of the field of artificial intelligence in recent years, robots have been applied in all walks of life; a robot accepts human commands, performs certain tasks, and assists or replaces certain human work.
At work, a robot usually has one interaction object being served and provides service according to that object's needs. In the prior art, however, when multiple customers face the robot at the same time, it is unclear who the robot's current interaction object is. Interaction with the robot is therefore inflexible, the robot cannot be controlled to autonomously turn toward a customer to serve it, and the customer experience is poor.
Summary of the invention
Embodiments of the present invention provide a robot control method, apparatus, electronic device and storage medium, to solve the prior-art problem that a robot cannot autonomously turn toward the customer it is serving, so that the customer experience is poor.
An embodiment of the present invention provides a robot control method, the method comprising:
Determining the face sub-images contained in an environment image captured by a robot;
Controlling the robot to display the face sub-images;
Determining a target face image among the currently displayed face sub-images;
Controlling the robot to perform focus following on the target object corresponding to the target face image.
Further, determining the target face image among the currently displayed face sub-images comprises:
If a first face image matching a pre-designated face image exists among the face sub-images, determining the first face image as the target face image; or
If no first face image matching a pre-designated face image exists among the face sub-images, determining the second face sub-image with the largest face size among the face sub-images as the target face image.
Further, determining the target face image among the currently displayed face sub-images further comprises:
If a face selection instruction is acquired, determining the third face image indicated by the face selection instruction as the target face image.
Further, acquiring the face selection instruction comprises:
If a touch operation is received in the region corresponding to any face sub-image displayed by the robot, determining that the face selection instruction is acquired.
Further, controlling the robot to display the face sub-images comprises:
If the environment image contains multiple face sub-images, selecting a preset quantity of face sub-images in descending order of face size;
Controlling the robot to display the preset quantity of face sub-images.
Further, after determining the target face image among the displayed face sub-images, the method further comprises:
If it is detected that the target object is no longer in the environment image, re-determining the target face image among the currently displayed face sub-images.
Further, the method further comprises:
Controlling the robot to display, in the display area corresponding to the target face image, character attribute information of the target face image.
Further, after determining the target face image among the currently displayed face sub-images, controlling the robot to display the face sub-images further comprises:
Controlling the robot to highlight the target face image according to a preset display effect.
Further, before controlling the robot to display the face sub-images, the method further comprises:
Determining that a designated focus-following function of the robot has been turned on.
Further, after controlling the robot to display the face sub-images, the method further comprises:
If an image hiding instruction is acquired, controlling the robot to hide the displayed face sub-images.
In another aspect, an embodiment of the present invention provides a robot control apparatus, the apparatus comprising:
A face determination module, configured to determine the face sub-images contained in an environment image captured by a robot;
A display control module, configured to control the robot to display the face sub-images;
A target determination module, configured to determine a target face image among the currently displayed face sub-images;
A following control module, configured to control the robot to perform focus following on the target object corresponding to the target face image.
Further, the target determination module is specifically configured to: if a first face image matching a pre-designated face image exists among the face sub-images, determine the first face image as the target face image; and if no first face image matching a pre-designated face image exists among the face sub-images, determine the second face sub-image with the largest face size among the face sub-images as the target face image.
Further, the target determination module is further configured to, if a face selection instruction is acquired, determine the third face image indicated by the face selection instruction as the target face image.
Further, the target determination module is specifically configured to, if a touch operation is received in the region corresponding to any face sub-image displayed by the robot, determine that the face selection instruction is acquired.
Further, the display control module is specifically configured to: if the environment image contains multiple face sub-images, select a preset quantity of face sub-images in descending order of face size, and control the robot to display the preset quantity of face sub-images.
Further, the apparatus further comprises:
A detection module, configured to, if it is detected that the target object is no longer in the environment image, re-determine the target face image among the currently displayed face sub-images.
Further, the display control module is further configured to control the robot to display, in the display area corresponding to the target face image, character attribute information of the target face image.
Further, the display control module is specifically configured to control the robot to highlight the target face image according to a preset display effect.
Further, the apparatus further comprises:
A function determination module, configured to determine that a designated focus-following function of the robot has been turned on.
Further, the apparatus further comprises:
A hiding module, configured to, if an image hiding instruction is acquired, control the robot to hide the displayed face sub-images.
An embodiment of the present invention provides an electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
The memory is configured to store a computer program;
The processor is configured to implement any of the above method steps when executing the program stored on the memory.
An embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored; the computer program, when executed by a processor, implements any of the above method steps.
In embodiments of the present invention, the electronic device can determine each face image in the environment image captured by the robot, determine a target face image among the currently displayed face sub-images, and control the robot to perform focus following on the target object corresponding to the target face image. The robot control method provided by the embodiments therefore puts the detected faces on the screen, so that users can intuitively perceive the robot's visual capability, which improves the user experience; furthermore, users can at any time select or switch the target face image among the displayed face sub-images according to need, thereby achieving focus following of the corresponding target object, improving the flexibility of the robot's service and better serving customers, which improves the customer experience. Moreover, because focus following is performed at all times, the robot is controlled to keep facing the target object being served, giving the currently served target object a better, more attended-to experience.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the robot control process provided in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the robot display screen provided in Embodiment 4 of the present invention;
Fig. 3 is a schematic diagram of the robot control flow provided in Embodiment 7 of the present invention;
Fig. 4 is a schematic diagram of the robot control flow provided in Embodiment 8 of the present invention;
Fig. 5 is a schematic structural diagram of the robot control apparatus provided in Embodiment 9 of the present invention;
Fig. 6 is a schematic structural diagram of the electronic device provided in Embodiment 10 of the present invention.
Specific embodiment
The present invention will be described below in further detail with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiment 1:
Fig. 1 is a schematic diagram of the robot control process provided in an embodiment of the present invention; the process comprises the following steps:
S101: determining the face sub-images contained in an environment image captured by a robot.
The robot control method provided in the embodiment of the present invention is applied to a controller of the robot, and can also be applied to an external device able to control the robot, such as a PC, a tablet computer or a server. The robot is provided with an image capture device (such as a camera) through which it captures environment images, where an environment image can be understood as an image of the image capture device's field of view. For clarity, the image of a face contained in an environment image is defined as a face sub-image of the environment image.
In addition, the image capture device provided in the robot may capture environment images in real time, or a capture period may be set so that environment images are captured according to the set period. For example, if the set period is 30 seconds, the image capture device captures an environment image every 30 seconds after it is started.
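The periodic acquisition mode described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class name, `capture_fn` (standing in for the robot camera) and the injectable `clock` are all assumptions introduced for testability.

```python
import time


class PeriodicCapture:
    """Minimal sketch of periodic environment-image acquisition.

    capture_fn stands in for the robot's camera; period_s is the
    configured acquisition period (the text uses 30 s as an example).
    All names are illustrative, not from the patent.
    """

    def __init__(self, capture_fn, period_s=30.0, clock=time.monotonic):
        self.capture_fn = capture_fn
        self.period_s = period_s
        self.clock = clock
        self._next_due = clock()  # first capture is due immediately

    def poll(self):
        """Return a new environment image if the period has elapsed, else None."""
        now = self.clock()
        if now >= self._next_due:
            self._next_due = now + self.period_s
            return self.capture_fn()
        return None
```

Real-time acquisition is simply the degenerate case `period_s=0`, with `poll()` called on every control-loop iteration.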
S102: controlling the robot to display the face sub-images.
The robot is provided with a display screen. If a face is detected in the environment image (i.e. the environment image contains a face image), the robot can be controlled to display the detected face sub-images on the display screen. In the embodiment of the present invention, a face sub-image contains the region of the recognized person from the top of the head to the neckline.
In a specific implementation, if the image capture device in the robot captures environment images periodically, then after each capture the electronic device identifies each face image in the environment image and controls the robot to display the face sub-images of each environment image; in this case the face sub-images are also displayed periodically. Of course, the image capture device in the robot may instead capture environment images in real time, with the electronic device identifying each face sub-image in the captured environment images in real time and controlling the robot to display the face sub-images of each environment image in real time.
In a specific implementation, when the robot displays the face sub-images, they can be displayed in an arbitrary region of the robot's display screen, such as the right-hand area of the screen. The face sub-images can also be displayed in the form of a floating frame.
S103: determining the target face image among the currently displayed face sub-images.
After the electronic device controls the robot to display the face sub-images on the display screen, the target face image among the face sub-images can be determined.
Any face sub-image may be taken as the target face image; or the target face image may be determined according to a click operation of the user; or the target face image may be selected among the face sub-images according to a preset selection rule. The preset selection rule may be to select, as the target face image, the face sub-image corresponding to the face closest to the robot, or to determine a priority for each face image and select the face sub-image of highest priority as the target face image, and so on.
S104: controlling the robot to perform focus following on the target object corresponding to the target face image.
Specifically, after the electronic device determines the target face image, it determines in real time the position information of the target object corresponding to the target face image, and then controls the robot according to that position information so as to perform focus following on the target object.
For example, an image coordinate system of the environment image can first be determined; then, according to the position of the target face image in the environment image, the coordinate information of the corresponding target object in the environment image is determined. When determining the image coordinate system of the environment image, the electronic device may take any of the four vertices of the environment image as the origin, for example taking the top-left vertex as the origin, the direction from the top-left vertex to the top-right vertex as the positive direction of the x-axis (horizontal axis), and the direction from the top-left vertex to the bottom-left vertex as the negative direction of the y-axis (vertical axis). Alternatively, the center point of the environment image may be taken as the origin, with the horizontal direction from the center point to the right edge of the image as the positive direction of the x-axis and the direction from the center point toward the top edge of the image as the positive direction of the y-axis. The possible image coordinate systems are too numerous to list here; any definition is acceptable as long as each pixel in the environment image corresponds to unique coordinate information in the determined image coordinate system. The coordinate information of the target face image in the environment image may be the coordinate information of the center pixel of the target face image in the environment image.
In a possible embodiment, a correspondence between the coordinate information of each pixel in the environment image and a robot rotation angle can be pre-saved in the electronic device. After the electronic device determines the target face image, it determines the coordinate information of the target face image in the environment image and, according to that coordinate information and the pre-saved correspondence between coordinate information and rotation angle, determines the target rotation angle corresponding to the target face image.
After the electronic device determines the target rotation angle corresponding to the target face image, it controls the robot to rotate according to the target rotation angle. The rotation of the robot may include left-right rotation and pitch rotation, which is not elaborated in the embodiment of the present invention.
For example, if the electronic device determines that the target rotation angle is 20 degrees to the right, it sends the robot a control instruction carrying "turn right 20 degrees"; after receiving the control instruction, the robot turns itself 20 degrees to the right, so that after the rotation the robot faces the target object corresponding to the target face image.
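The pixel-to-angle correspondence described above can be approximated by a simple linear model, sketched below. Note the patent pre-stores a lookup table from pixel coordinates to rotation angles; this linear pinhole-style mapping and the assumed known horizontal field of view are illustrative substitutes, not the patent's method.

```python
def rotation_angle(face_center_x, image_width, horizontal_fov_deg):
    """Map the target face's horizontal pixel coordinate to a yaw angle.

    Positive result = turn right, negative = turn left.  Assumes a
    linear pixel-to-angle relation across a camera with the given
    horizontal field of view; an assumption for illustration only.
    """
    offset = face_center_x - image_width / 2.0  # pixels right of image centre
    return offset / image_width * horizontal_fov_deg
```

For a 1280-pixel-wide image with a 60-degree field of view, a face at the image centre yields 0 degrees, and a face at the right edge yields 30 degrees to the right; the pitch axis can be handled identically with the vertical coordinate and vertical field of view.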
In the embodiment of the present invention, the electronic device can determine each face image in the environment image captured by the robot, determine a target face image among the currently displayed face sub-images, and control the robot to perform focus following on the corresponding target object. The method thus puts the detected faces on the screen, letting users intuitively perceive the robot's visual capability and improving the user experience; users can also select or switch the target face image among the displayed face sub-images at any time as needed, so that focus following of the corresponding target object is achieved, the flexibility of the robot's service is improved, and customers are better served. Moreover, since focus following is performed at all times, the robot keeps facing the target object being served, which gives the currently served target object a better, more attended-to experience.
Embodiment 2:
In order to make the determination of the target face image more accurate, on the basis of the above embodiment, in the embodiment of the present invention, determining the target face image among the currently displayed face sub-images comprises:
If a first face image matching a pre-designated face image exists among the face sub-images, determining the first face image as the target face image; or
If no first face image matching a pre-designated face image exists among the face sub-images, determining the second face sub-image with the largest face size among the face sub-images as the target face image.
In the embodiment of the present invention, a designated face image can be pre-saved in the electronic device, for example the face image of a Very Important Person (VIP) customer. The electronic device judges whether a first face image matching the pre-designated face image exists among the face sub-images; if it does, the first face image is determined to be the face image of a VIP customer, the first face image is determined as the target face image, and the subsequent process of following the target object corresponding to the target face image is executed. In this way the robot can be controlled to preferentially follow and serve VIP customers, improving the usage experience of VIP customers.
It should be noted that there may be one or more pre-designated face images. If there are multiple pre-designated face images, and multiple first face images matching pre-designated face images exist among the face sub-images recognized in the environment image, any one of the first face images may be chosen as the target face image, or the first face image of highest priority may be chosen as the target face image, and so on.
If no first face image matching a pre-designated face image exists among the face sub-images, there is no VIP customer within the robot's field of view; in this case the robot can be controlled to follow the closest customer (i.e. the one with the largest face size). The electronic device identifies the size of each face image and then determines the second face sub-image with the largest face size as the target face image. Specifically, the electronic device can count the number of pixels contained in each face image; the face sub-image containing the most pixels is the second face sub-image of largest size, which is taken as the target face image before the subsequent process of following the corresponding target object is executed. In this way, when there is no VIP customer within the robot's field of view, the robot is controlled to preferentially turn toward and serve the closest customer.
In the embodiment of the present invention, the face image of a VIP customer within the robot's field of view is determined as the target face image; if there is no VIP customer within the robot's field of view, the face image of the customer closest to the robot is determined as the target face image. The scheme for determining the target face image provided by the embodiment of the present invention is therefore more accurate, and the customer experience is better.
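The selection rule of Embodiment 2 (VIP match first, otherwise largest face) can be sketched as a small pure function. The face-matching model itself is out of scope here, so it is injected as a predicate; all names are illustrative assumptions, not the patent's identifiers.

```python
def select_target(face_images, vip_faces, matches, face_size):
    """Pick the target face per Embodiment 2.

    face_images : detected face sub-images (any ids stand in here)
    vip_faces   : pre-designated (VIP) face images
    matches     : predicate standing in for the face-matching model
    face_size   : size measure, e.g. pixel count of a face sub-image

    Returns a VIP match if one exists, otherwise the largest face,
    or None when no faces were detected.
    """
    if not face_images:
        return None
    vip_hits = [f for f in face_images if any(matches(f, v) for v in vip_faces)]
    if vip_hits:
        # The patent allows choosing any matching VIP, or the one of
        # highest priority; here we simply take the first.
        return vip_hits[0]
    return max(face_images, key=face_size)
```

A priority ordering among multiple VIP matches, as the text mentions, could be added by sorting `vip_hits` with a priority key before taking the first element.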
Embodiment 3:
In order to let the robot look at whoever is selected, and to improve operability for administrative staff or customers, on the basis of the above embodiments,
in the embodiment of the present invention, determining the target face image among the currently displayed face sub-images further comprises:
If a face selection instruction is acquired, determining the third face image indicated by the face selection instruction as the target face image.
The electronic device judges whether a face selection instruction has been received. If not, the target face image is determined in the manner of Embodiment 2 above; if so, the third face image indicated by the face selection instruction is determined as the target face image.
It should be noted that if the target face image has already been determined in the manner of Embodiment 2 and a face selection instruction is received afterwards, the target face image needs to be re-determined according to the face selection instruction, i.e. the third face image indicated by the face selection instruction is determined as the target face image.
In a specific implementation, acquiring the face selection instruction comprises:
If a touch operation is received in the region corresponding to any face sub-image displayed by the robot, determining that the face selection instruction is acquired.
In the embodiment of the present invention, a display screen is installed on the robot. After the electronic device determines each face image, it controls the robot to display each face image on the display screen, and determines whether a face selection instruction has been received by judging whether a touch operation by the user on a face sub-image has been received on the display screen. The touch operation may be a tap or a box-selection operation; the specific implementation of the touch operation is not limited in the embodiment of the present invention.
Of course, in addition to the above implementation, the electronic device may also determine whether a face selection instruction has been received by judging whether a voice selection operation of the user has been received. The voice selection operation may be a voice instruction containing the display position of a face sub-image (for example, "take the face sub-image displayed in the first position as the target face image"), or a voice instruction containing the character attribute information corresponding to a face sub-image (for example, "take Zhang San's face sub-image as the target face image"); in the latter case, a correspondence between faces and character attribute information can also be provided in the electronic device in advance.
In the embodiment of the present invention, each face image is displayed on the robot's display screen, and the user sends the face selection instruction by performing a touch operation on the display screen; the operation is therefore simple for the user and improves the user experience.
Embodiment 4:
In order to simplify the content on the robot's display screen, on the basis of the above embodiments, in the embodiment of the present invention, controlling the robot to display the face sub-images comprises:
If the environment image contains multiple face sub-images, selecting a preset quantity of face sub-images in descending order of face size;
Controlling the robot to display the preset quantity of face sub-images.
In the embodiment of the present invention, the electronic device sorts the face sub-images according to face size, for example by the number of pixels each face sub-image contains, and selects a preset quantity of face sub-images, from largest to smallest, to display on the robot's display screen. The preset quantity may be any number, such as 5 or 6. It may be set according to display requirements by administrative staff, preset before the robot leaves the factory, or configured through a quantity selection function of the robot: after the quantity selection function is turned on, a quantity configuration interface is displayed on the robot's display screen, in which the user can either enter a quantity or pick from displayed candidate quantities, thereby configuring the number of face sub-images displayed. The specific configuration manner of the preset quantity is not limited in the embodiment of the present invention. Fig. 2 is a schematic diagram of the robot display screen; as shown in Fig. 2, five face images are displayed on the screen, and the user can perform a touch operation on these five face sub-images to send a face selection instruction.
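The size-ranked selection of Embodiment 4 amounts to a sort-and-truncate, sketched below. The function name and the dict-based representation of face areas are illustrative assumptions.

```python
def faces_to_display(face_areas, preset_quantity=5):
    """Choose which face sub-images to show on the robot's screen.

    face_areas maps a face id to its pixel count (the size measure the
    text suggests).  Faces are sorted largest-first and at most
    preset_quantity of them are kept for display.
    """
    ranked = sorted(face_areas, key=face_areas.get, reverse=True)
    return ranked[:preset_quantity]
```

With six detected faces and the default quantity of 5 (as in Fig. 2), the smallest face is simply dropped from the screen.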
Embodiment 5:
In the above embodiments, the determined target object may leave the robot's field of view. To ensure that, after the target object leaves the field of view, the robot is controlled to continue by following the next target object, in the embodiment of the present invention, after determining the target face image among the displayed face sub-images, the method further comprises:
If it is detected that the target object is no longer in the environment image, re-determining the target face image among the currently displayed face sub-images.
The electronic device controls the robot to capture environment images in real time. After the target face image is determined, the robot is controlled to perform focus following on the corresponding target object and continues to capture environment images; when the target object no longer appears in the currently captured environment image, the target face image among the currently displayed face sub-images is re-determined.
In addition, a followed target object may briefly leave the robot's field of view and then return to continue interacting with the robot. If such a brief departure caused the target object to be re-determined, the service experience of the briefly departing target object would suffer. To avoid this, a time length can be preset in the electronic device, where the preset time length can be any duration, for example 2 seconds or 3 seconds. When the target object does not appear in a captured environment image, the electronic device judges whether the target object has gone undetected in all environment images captured within the preset time length. If so, the target object has left, the interaction between the robot and that target object ends, and the target face image is re-determined among the currently displayed face sub-images. If the target object is detected in some frame of the environment images captured within the preset time length, the robot continues to follow that target object.
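The brief-departure tolerance described above can be sketched as follows. The class and method names are illustrative, not from the patent; only the rule is taken from the text: re-determine the target only after it has been absent from the environment image for a preset time length (taken here as 2 seconds).

```python
import time

class FocusFollower:
    """Sketch of the re-determination rule in Embodiment 5 (illustrative names)."""

    def __init__(self, absence_timeout=2.0, clock=time.monotonic):
        self.absence_timeout = absence_timeout  # preset time length, e.g. 2 or 3 s
        self.clock = clock                      # injectable clock, useful for testing
        self.absent_since = None                # None while the target is visible

    def should_redetermine(self, target_in_frame):
        """Call once per captured environment image; returns True when a new
        target face image should be selected from the displayed sub-images."""
        if target_in_frame:
            self.absent_since = None            # a brief departure is forgiven
            return False
        if self.absent_since is None:
            self.absent_since = self.clock()    # target just went missing
        return self.clock() - self.absent_since >= self.absence_timeout
```

A departure shorter than the timeout leaves the current target in place; only a sustained absence triggers re-determination.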
The process of determining the target face image in the embodiment of the present invention is the same as described in the above embodiments and is not repeated here.
The electronic device determines the target face image from the environment image currently acquired in real time by the image capture device on the robot, where the image capture device may capture environment images periodically or in real time; the capture method is the same as described in the above embodiments and is not repeated here.
Embodiment 6:
To enrich the information displayed alongside the target face image on the display screen, on the basis of the above embodiments, in the embodiment of the present invention the method further includes:
Controlling the robot to display, in a display area corresponding to the target face image, person attribute information of the target face image.
In the embodiment of the present invention, if the target face image is a face image matching a pre-specified face image, then when the electronic device pre-saves the specified face image it can also save the person attribute information corresponding to that image. The person attribute information includes identification information of the person, such as a name or an employee number, and may also include information such as the person's gender, post, and hobbies. After determining the target face image, the electronic device also controls the robot to display the person attribute information of the target face image in the corresponding display area. That display area can be above or below the display area of the target face image, i.e. an area adjacent to it, so that the user can clearly see which face image the displayed person attribute information belongs to.
If the target face image does not match any pre-specified face image, the person attribute information of the corresponding target object can be entered by the user after the target face image is determined; the robot is then controlled to display that information in the display area corresponding to the target face image.
In addition, besides the person attribute information of the target face image, the person attribute information of every displayed face image can be shown according to the user's needs. Specifically, the user can enter the person attribute information of the object corresponding to each face image, and the robot is then controlled to display the corresponding information in the display area of each face image.
In the embodiment of the present invention, after the electronic device determines the target face image, it controls the robot to display the target face image on the display screen and to display the person attribute information of the target face image in the corresponding display area, so that the information shown for the target face image is more complete.
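The lookup-and-placement behaviour of Embodiment 6 can be sketched as follows. The identifiers, the attribute schema, and the pixel geometry are all assumptions for illustration; the patent only specifies that pre-saved faces carry pre-saved attributes, unmatched faces take user-entered attributes, and the attribute text sits adjacent to the face image.

```python
# Hypothetical pre-saved store of person attribute information (illustrative data).
PRESAVED_ATTRIBUTES = {
    "face_001": {"name": "Zhang San", "employee_id": "A1024", "post": "Manager"},
}

def attributes_for(face_id, user_supplied=None):
    """Pre-saved attributes win for matched faces; unmatched faces fall back
    to attributes entered by the user (or nothing)."""
    if face_id in PRESAVED_ATTRIBUTES:
        return PRESAVED_ATTRIBUTES[face_id]
    return user_supplied or {}

def display_slot(face_region):
    """Place the attribute text directly below the face's display area so the
    user can tell which face the attributes belong to (strip height arbitrary)."""
    x, y, w, h = face_region
    return (x, y + h, w, 20)
```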
Embodiment 7:
To further improve the user experience, on the basis of the above embodiments, in the embodiment of the present invention, after the target face image in the currently displayed face sub-images is determined, controlling the robot to display the face sub-images further includes:
Controlling the robot to highlight the target face image according to a preset display effect.
In the embodiment of the present invention, after the target face image is determined, it is shown on the robot's display screen according to a preset effect. The preset effect can be enlarged display or highlighted display, as long as it makes the target face image stand out. As shown in Fig. 2, the face image displayed at the top is the target face image; by displaying the target face image distinctively, the user can clearly tell from the display screen which target object the robot is currently following, further improving the user experience.
On the basis of any of the above embodiments, before controlling the robot to display the face sub-images, the method further includes:
Determining that a specified focus-following function of the robot has been turned on.
A selectable specified focus-following function button can be provided on the electronic device. By selecting it, the user sends the electronic device an instruction to turn on the robot's specified focus-following function. After receiving this instruction, the electronic device carries out the subsequent steps of controlling the robot; when the instruction has not been received, the electronic device does not control the robot to execute the subsequent steps.
In a possible embodiment, after controlling the robot to display the face sub-images, the method further includes:
If an image concealing instruction is obtained, controlling the robot to hide the displayed face sub-images.
A selectable image concealing button can be provided on the electronic device; by selecting it, the user sends an image concealing instruction. After receiving the instruction, the electronic device controls the robot to hide the face images shown on the display screen. Of course, if person attribute information is also shown on the display screen, it is hidden together with the face images. When the user has not selected the image concealing button, i.e. no image concealing instruction has been sent, the electronic device does not control the robot to hide the face sub-images.
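A minimal sketch of this concealing behaviour, assuming a simple display-state container (the class is an illustration, not the patent's implementation): face sub-images and any shown attribute text are hidden together.

```python
class FaceDisplay:
    """Hypothetical display state for the face sub-images and attribute text."""

    def __init__(self):
        self.faces_visible = True
        self.attributes_visible = True

    def on_hide_instruction(self):
        """Handle an image concealing instruction (e.g. the on-screen button):
        the attribute text is hidden together with the face images."""
        self.faces_visible = False
        self.attributes_visible = False
```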
The robot control flow provided in the embodiment of the present invention is described in detail below with reference to a specific embodiment. The robot control flow of the embodiment shown in Fig. 3 includes the following steps:
S201: pre-save a specified face image and the corresponding person attribute data entered into the electronic device.
S202: judge whether a control instruction to turn on the robot's specified focus-following function has been received; if not, perform S203; if so, perform S204.
S203: control the robot to enter another mode.
S204: control the robot to enter the specified focus-following mode; in this mode, control the robot to display each face image and display the target face image according to the preset effect, and then perform S205 and S207 respectively.
S205: if a face selection instruction is obtained, determine the third face image indicated by the face selection instruction as the target face image; if no face selection instruction is obtained and the face sub-images contain a first face image matching a pre-specified face image, determine the first face image as the target face image; if the face sub-images contain no first face image matching a pre-specified face image, determine the second face sub-image with the largest face size as the target face image. After the target face image has been determined, if a face selection instruction is then received, the target face image needs to be re-determined according to that instruction, i.e. the third face image indicated by the face selection instruction is determined as the target face image.
S206: control the robot to perform focus following on the target object corresponding to the target face image.
S207: judge whether an image concealing instruction has been received; if so, perform S208; if not, perform S209.
S208: control the robot to hide the face sub-images on the display screen.
S209: continue to display the face sub-images on the display screen.
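The three-way priority in step S205 can be sketched as follows. The data shapes and identifiers are assumptions, since the patent does not specify them: a face selection instruction wins outright, then a face matching a pre-specified image (the "first" face image), then the face sub-image with the largest face size (the "second" face sub-image).

```python
def determine_target(faces, matches_prespecified, selected_id=None):
    """faces: list of dicts with 'id' and 'size' (e.g. bounding-box area).
    matches_prespecified: predicate for matching a pre-specified face image.
    selected_id: id from a face selection (touch) instruction, if any;
    assumed to refer to one of the listed faces."""
    if selected_id is not None:
        # third face image: indicated by the face selection instruction
        return next(f for f in faces if f["id"] == selected_id)
    for f in faces:
        if matches_prespecified(f):
            return f  # first face image: matches a pre-specified face image
    return max(faces, key=lambda f: f["size"])  # second face sub-image: largest
```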
Embodiment 8:
Fig. 4 is a schematic diagram of a robot control flow provided in an embodiment of the present invention; the flow includes the following steps:
S301: pre-save a specified face image and the corresponding person attribute data entered into the electronic device.
The specified face image in the embodiment of the present invention can be, for example, the face image of a leader or of a very important person.
S302: judge whether a control instruction to turn on the robot's specified focus-following function has been received; if not, perform S303; if so, perform S304.
S303: control the robot to enter another mode.
S304: judge whether the environment image contains a first face image matching a pre-specified face image; if so, perform S305; if not, perform S306.
S305: determine the first face image as the target face image.
S306: determine the second face sub-image with the largest face size as the target face image.
S307: control the robot to perform focus following on the target object corresponding to the target face image.
S308: control the robot to display each face image and display the target face image according to the preset effect.
S309: judge whether a face selection instruction has been obtained; if so, perform S310; if not, perform S311.
S310: determine the third face image indicated by the face selection instruction as the new target face image, and perform focus following on the target object corresponding to the new target face image.
S311: continue to follow the original target object.
S312: judge whether an image concealing instruction has been received; if so, perform S313; if not, perform S314.
S313: control the robot to hide the face sub-images on the display screen.
S314: continue to display the face sub-images on the display screen.
While controlling the robot to display the face sub-images, a "hide me" instruction button can be shown on the display screen; the user sends the image concealing instruction by touching it.
The execution order of step S309 is not limited in this embodiment: whenever a face selection instruction is received, the third face image indicated by it is determined as the new target face image, and focus following is performed on the target object corresponding to the new target face image.
Embodiment 9:
Fig. 5 is a schematic structural diagram of a robot control device provided in an embodiment of the present invention; the device includes:
a face determining module 41, configured to determine the face sub-images contained in the environment image captured by the robot;
a display control module 42, configured to control the robot to display the face sub-images;
a target determination module 43, configured to determine the target face image in the currently displayed face sub-images;
a following control module 44, configured to control the robot to perform focus following on the target object corresponding to the target face image.
The target determination module 43 is specifically configured to: if the face sub-images contain a first face image matching a pre-specified face image, determine the first face image as the target face image; if the face sub-images contain no first face image matching a pre-specified face image, determine the second face sub-image with the largest face size as the target face image.
The target determination module 43 is further configured to, if a face selection instruction is obtained, determine the third face image indicated by the face selection instruction as the target face image.
The target determination module 43 is specifically configured to determine that the face selection instruction is obtained if a touch operation is received in the area corresponding to any face sub-image displayed by the robot.
The display control module 42 is specifically configured to, if the environment image contains multiple face sub-images, successively choose a preset number of face sub-images from the face sub-images in descending order of face size, and control the robot to display the preset number of face sub-images.
The target determination module 43 is further configured to re-determine the target face image in the currently displayed face sub-images if it is detected that the target object is not in the environment image.
The display control module 42 is further configured to control the robot to display, in the display area corresponding to the target face image, the person attribute information of the target face image.
The display control module 42 is specifically configured to control the robot to highlight the target face image according to a preset display effect.
The device further includes:
a function determining module 45, configured to determine that the specified focus-following function of the robot has been turned on.
The device further includes:
a hiding module 46, configured to control the robot to hide the displayed face sub-images if an image concealing instruction is obtained.
Embodiment 10:
Based on the same inventive concept, an embodiment of the present invention further provides an electronic device. Since the principle by which the electronic device solves the problem is similar to that of the robot control method, the implementation of the electronic device can refer to the implementation of the method, and repetitions are not described again.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. As shown in Fig. 6, it includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504.
A computer program is stored in the memory 503; when the program is executed by the processor 501, the processor 501 performs the following steps:
determining the face sub-images contained in the environment image captured by the robot;
controlling the robot to display the face sub-images;
determining the target face image in the currently displayed face sub-images;
controlling the robot to perform focus following on the target object corresponding to the target face image.
The electronic device provided in the embodiment of the present invention can specifically be a desktop computer, a portable computer, a smartphone, a tablet computer, a personal digital assistant (PDA), a network-side device, etc., or the robot itself.
The communication bus mentioned for the electronic device can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, etc. For convenience of representation it is shown as only one thick line in the figure, which does not mean there is only one bus or one type of bus.
The communication interface 502 is used for communication between the electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor can be a general-purpose processor, including a central processing unit or a network processor (NP); it can also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
In the embodiment of the present invention, when the processor executes the program stored in the memory, it determines the face sub-images contained in the environment image captured by the robot, controls the robot to display the face sub-images, determines the target face image in the currently displayed face sub-images, and controls the robot to perform focus following on the target object corresponding to the target face image.
Embodiment 11:
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program executable by an electronic device is stored; when the program runs on the electronic device, the electronic device performs the following steps:
determining the face sub-images contained in the environment image captured by the robot;
controlling the robot to display the face sub-images;
determining the target face image in the currently displayed face sub-images;
controlling the robot to perform focus following on the target object corresponding to the target face image.
Since the principle by which the processor solves the problem when executing the computer program stored in the computer-readable storage medium is similar to that of the robot control method, the implementation of the processor executing that program can refer to the implementation of the method, and repetitions are not described again.
The computer-readable storage medium can be any usable medium or data storage device accessible to the processor in the electronic device, including but not limited to magnetic memories such as floppy disks, hard disks, magnetic tapes and magneto-optical disks (MO), optical memories such as CDs, DVDs, BDs and HVDs, and semiconductor memories such as ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH) and solid-state drives (SSD).
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, additional changes and modifications can be made to these embodiments once a person skilled in the art learns of the basic inventive concept. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (10)
1. A robot control method, characterized in that the method includes:
determining the face sub-images contained in the environment image captured by a robot;
controlling the robot to display the face sub-images;
determining a target face image in the currently displayed face sub-images;
controlling the robot to perform focus following on the target object corresponding to the target face image.
2. The method of claim 1, characterized in that determining the target face image in the currently displayed face sub-images includes:
if the face sub-images contain a first face image matching a pre-specified face image, determining the first face image as the target face image; or
if the face sub-images contain no first face image matching a pre-specified face image, determining the second face sub-image with the largest face size among the face sub-images as the target face image.
3. The method of claim 1 or 2, characterized in that determining the target face image in the currently displayed face sub-images further includes:
if a face selection instruction is obtained, determining the third face image indicated by the face selection instruction as the target face image.
4. The method of claim 3, characterized in that obtaining the face selection instruction includes:
if a touch operation is received in the area corresponding to any face sub-image displayed by the robot, determining that the face selection instruction is obtained.
5. The method of claim 1, characterized in that controlling the robot to display the face sub-images includes:
if the environment image contains multiple face sub-images, successively choosing a preset number of face sub-images from the face sub-images in descending order of face size;
controlling the robot to display the preset number of face sub-images.
6. The method of claim 1, characterized in that after the target face image in the currently displayed face sub-images is determined, controlling the robot to display the face sub-images further includes:
controlling the robot to highlight the target face image according to a preset display effect.
7. The method of claim 1, characterized in that after controlling the robot to display the face sub-images, the method further includes:
if an image concealing instruction is obtained, controlling the robot to hide the displayed face sub-images.
8. A robot control device, characterized in that the device includes:
a face determining module, configured to determine the face sub-images contained in the environment image captured by a robot;
a display control module, configured to control the robot to display the face sub-images;
a target determination module, configured to determine a target face image in the currently displayed face sub-images;
a following control module, configured to control the robot to perform focus following on the target object corresponding to the target face image.
9. An electronic device, characterized in that it includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to, when executing the program stored in the memory, realize the method steps of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1-7 are realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910533876.5A CN110238854A (en) | 2019-06-19 | 2019-06-19 | A kind of robot control method, device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910533876.5A CN110238854A (en) | 2019-06-19 | 2019-06-19 | A kind of robot control method, device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110238854A true CN110238854A (en) | 2019-09-17 |
Family
ID=67888276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910533876.5A Pending CN110238854A (en) | 2019-06-19 | 2019-06-19 | A kind of robot control method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110238854A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112518750A (en) * | 2020-11-30 | 2021-03-19 | 深圳优地科技有限公司 | Robot control method, robot control device, robot, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325491A (en) * | 2008-07-28 | 2008-12-17 | 北京中星微电子有限公司 | Method and system for controlling user interface of instant communication software |
CN104732210A (en) * | 2015-03-17 | 2015-06-24 | 深圳超多维光电子有限公司 | Target human face tracking method and electronic equipment |
US20150235073A1 (en) * | 2014-01-28 | 2015-08-20 | The Trustees Of The Stevens Institute Of Technology | Flexible part-based representation for real-world face recognition apparatus and methods |
CN105100579A (en) * | 2014-05-09 | 2015-11-25 | 华为技术有限公司 | Image data acquisition processing method and related device |
CN108009521A (en) * | 2017-12-21 | 2018-05-08 | 广东欧珀移动通信有限公司 | Humanface image matching method, device, terminal and storage medium |
2019-06-19: CN CN201910533876.5A patent/CN110238854A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325491A (en) * | 2008-07-28 | 2008-12-17 | 北京中星微电子有限公司 | Method and system for controlling user interface of instant communication software |
US20150235073A1 (en) * | 2014-01-28 | 2015-08-20 | The Trustees Of The Stevens Institute Of Technology | Flexible part-based representation for real-world face recognition apparatus and methods |
CN105100579A (en) * | 2014-05-09 | 2015-11-25 | 华为技术有限公司 | Image data acquisition processing method and related device |
CN104732210A (en) * | 2015-03-17 | 2015-06-24 | 深圳超多维光电子有限公司 | Target human face tracking method and electronic equipment |
CN108009521A (en) * | 2017-12-21 | 2018-05-08 | 广东欧珀移动通信有限公司 | Humanface image matching method, device, terminal and storage medium |
Non-Patent Citations (1)
Title |
---|
Huang Xuan: "Research on Mobile E-Commerce Security: Fuzzy Logic and Identity Recognition", 31 May 2016, Xidian University Press *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112518750A (en) * | 2020-11-30 | 2021-03-19 | 深圳优地科技有限公司 | Robot control method, robot control device, robot, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104969163B (en) | Methods of exhibiting, device and the electronic equipment of application interface | |
CN113093984B (en) | Device and method for accessing common device functions | |
US10082886B2 (en) | Automatic configuration of an input device based on contextual usage | |
CN118796082A (en) | User interface for a watch | |
US20150286391A1 (en) | System and method for smart watch navigation | |
US11169688B2 (en) | Message processing method, message viewing method, and terminal | |
CN104866082A (en) | User behavior based reading method and device | |
CN108564274B (en) | Guest room booking method and device and mobile terminal | |
US20220365667A1 (en) | User interfaces for managing accessories | |
US20180181263A1 (en) | Uninterruptable overlay on a display | |
CN106557672A (en) | The solution lock control method of head mounted display and device | |
CN112214112A (en) | Parameter adjusting method and device | |
CN107239222A (en) | The control method and terminal device of a kind of touch-screen | |
CN107704190A (en) | Gesture identification method, device, terminal and storage medium | |
CN103268151B (en) | A kind of data processing equipment and the method starting specific function thereof | |
TWI646526B (en) | Sub-screen distribution controlling method and device | |
CN108762626B (en) | Split-screen display method based on touch all-in-one machine and touch all-in-one machine | |
JP2016033726A (en) | Electronic apparatus, touch screen control method, and program | |
CN109271027A (en) | Page control method, device and electronic equipment | |
CN110238854A (en) | A kind of robot control method, device, electronic equipment and storage medium | |
CN103870117B (en) | A kind of information processing method and electronic equipment | |
CN105955634A (en) | Mobile intelligent terminal screenshot method and screenshot system | |
CN106547339B (en) | Control method and device of computer equipment | |
CN105867860B (en) | A kind of information processing method and electronic equipment | |
CN105892884A (en) | Screen direction determining method and device, and mobile device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190917