CN110465937A - Synchronization method, image processing method, human-computer interaction method, and related device - Google Patents


Info

Publication number
CN110465937A
Authority
CN
China
Prior art keywords
movement
image information
image
user
intelligent robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910565031.4A
Other languages
Chinese (zh)
Inventor
车宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910565031.4A priority Critical patent/CN110465937A/en
Publication of CN110465937A publication Critical patent/CN110465937A/en
Pending legal-status Critical Current


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J 9/1697: Vision controlled systems

Abstract

The present invention provides a synchronization method applied to biometric recognition, for use in an intelligent robot. The method includes: acquiring first image information of the robot's current environment and outputting the first image information, the first image information being displayed to a user so that the user performs a first action according to it; acquiring an analysis result of second image information captured during the period in which the user performs the first action; and, according to the analysis result, synchronously performing a second action matching the first action. The present invention also provides an image processing method, a human-computer interaction method, a human-computer interaction system, a computer, and a readable storage medium.

Description

Synchronization method, image processing method, human-computer interaction method, and related device
Technical field
The present invention relates to the field of biometric recognition, and more particularly to a synchronization method, an image processing method, a human-computer interaction method, and a related device.
Background art
With the development of intelligent-robot technology, robots currently on the market can replace manual labor for some basic operations by executing pre-written programs, for example sweeping robots and cooking robots. As market demand has grown, robot types capable of human-computer interaction have also appeared successively; they can carry out simple dialogue with a real person according to a pre-stored language library. With the further development of human-computer interaction technology, existing intelligent robots can also be equipped with sensors, through which specific actions of a real person, such as shaking the head, blinking, or waving, are received as input instructions, and corresponding actions are executed according to those input instructions. Such technology is widely used in entertainment and gaming.
However, the above human-computer interaction mode depends heavily on sensors and places high requirements on sensor sensitivity, which in turn places high requirements on the instruction-processing accuracy of the intelligent robot that works with the sensors to complete instructions. As a result, the robot's processing is complex and its operating efficiency low, so the cost of intelligent robots remains high.
Summary of the invention
One aspect of the present invention provides a synchronization method applied to biometric recognition, for use in an intelligent robot, the method including:
acquiring first image information of the robot's current environment and outputting the first image information, the first image information being displayed to a user so that the user performs a first action according to the first image information;
acquiring an analysis result of second image information captured during the period in which the user performs the first action;
according to the analysis result, synchronously performing a second action matching the first action.
Another aspect of the present invention provides an image processing method applied to biometric recognition, for use in an image processing device, the image processing method including the following steps:
acquiring second image information of the period during which a user performs a first action according to first image information;
analyzing the second image information to generate an analysis result, and outputting the analysis result to an intelligent robot, so that the intelligent robot synchronously performs, according to the analysis result, a second action matching the first action.
Another aspect of the present invention provides a human-computer interaction method applied to biometric recognition, including:
acquiring first image information of a current environment;
synchronously displaying the first image information to a user, so that the user can perform a first action according to the first image information;
acquiring second image information of the period during which the user performs the first action, and analyzing the second image information to generate an analysis result;
synchronously performing, according to the analysis result of the second image information, a second action matching the first action.
Another aspect of the present invention provides a human-computer interaction system applied to biometric recognition, including:
an intelligent robot, configured to acquire first image information of its current environment and transmit the first image information to a synchronization device;
the synchronization device, configured to synchronously display the first image information to a user, so that the user can perform a first action according to the first image information;
an image processing device, configured to acquire second image information of the period during which the user performs the first action, to analyze the second image information to generate an analysis result, and to output the analysis result of the second image information to the intelligent robot;
the intelligent robot synchronously performing, according to the analysis result of the second image information, a second action matching the first action.
Another aspect of the present invention provides a computer, including a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the method steps described above.
Another aspect of the present invention provides a computer-readable storage medium storing a computer program or instructions which, when executed by a processor, implement the method steps described above.
In the synchronization method applied to biometric recognition provided by embodiments of the present invention and applied to an intelligent robot, the robot can acquire and output first image information of its current environment; the first image information is displayed to a user so that the user performs a first action according to it, and second image information captured while the user performs the first action is acquired. According to the analysis result of this second image information, the intelligent robot can synchronously perform a second action matching the user's first action. In other words, the user can remotely control the movement of the intelligent robot to complete corresponding tasks; especially in workplaces with high labor intensity or high danger, the intelligent robot works in place of the user, safeguarding the user's safety. Moreover, the synchronization method provided by embodiments of the present invention is based only on analyzing image information, without relying on sensing devices, which helps reduce equipment cost and therefore favors large-scale deployment, particularly in high-intensity, high-risk fields of operation, where it has high practical significance.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the human-computer interaction system provided in Embodiment 1.
Fig. 2 is a timing diagram of the working process of the human-computer interaction system in Fig. 1.
Fig. 3 is a flowchart of the human-computer interaction method provided in Embodiment 2.
Fig. 4 is a flowchart of the synchronization method provided in Embodiment 3.
Fig. 5 is another flowchart of the synchronization method provided in Embodiment 3.
Fig. 6 is a further flowchart of the synchronization method provided in Embodiment 3.
Fig. 7 is a flowchart of the image processing method provided in Embodiment 4.
Fig. 8 is a schematic diagram of the module structure of the intelligent robot provided in an embodiment of the present invention.
The present invention is further described in the following detailed description with reference to the above drawings.
Specific embodiment
Embodiment 1
Referring to Fig. 1, which is a structural schematic diagram of the human-computer interaction system 10 applied to biometric recognition provided in this embodiment, the human-computer interaction system 10 includes:
an intelligent robot 11, configured to acquire first image information of the current environment of the intelligent robot 11 and transmit the first image information to a synchronization device 12;
a synchronization device 12, configured to synchronously display the first image information to a user 20, so that the user 20 can perform a first action according to the first image information;
an image processing device 13, configured to acquire second image information of the period during which the user 20 performs the first action, and to analyze the second image information to generate an analysis result, outputting the analysis result of the second image information to the intelligent robot 11, so that the intelligent robot 11 can, according to the analysis result of the second image information, synchronously perform a second action matching the first action.
In one embodiment, the human-computer interaction system 10 is applied to an entertainment scene: for example, the intelligent robot 11 synchronously performs a second action according to the first action performed by the user 20, so as to complete a game objective. In another embodiment, the intelligent robot 11 is applied to an industrial operation scene, such as a high-intensity or high-risk work task: the first action is performed remotely by the user 20, and the intelligent robot 11 synchronously performs, at the work site, a second action matching the first action, so as to complete the work task. In this embodiment, the industrial operation scene is taken as the example.
Referring to Fig. 1 and Fig. 2, during operation of the human-computer interaction system 10, the intelligent robot 11 is deployed at the work site (for example a high-risk, high-intensity site such as the deep sea or a cliff), while the user 20 can be at a location far from the work site. The intelligent robot 11 can acquire images or/and video of the environment it is currently in; these images or/and video constitute the first image information, which the intelligent robot 11 transmits to the synchronization device 12. In another embodiment, the intelligent robot 11 can also acquire sound information of its current location, to meet the needs of application scenarios in which operations must be performed according to sound.
The synchronization device 12 is a device that can communicate with the intelligent robot 11 by wire or wirelessly and can display images. In this embodiment, the synchronization device is a pair of VR glasses, which is highly portable and is worn on the head of the user 20 in use, for synchronously displaying the first image information to the user. "Synchronously" here does not necessarily mean at the identical instant; it merely means that the transmission time of the first image information is ignored.
When the user 20 learns, through the synchronization device 12, the current environmental situation of the intelligent robot 11, the user can analyze and judge according to the current environmental situation (namely the first image information) and perform a first action. For example, if the intelligent robot 11 is carrying out deep-sea salvage and a target salvage object appears in the current environment, the user 20 performs, in simulation, the action of grabbing the target salvage object (namely the first action).
The image processing device 13 is used to acquire second image information of the period during which the user 20 performs the first action; it can therefore be understood that the image processing device 13 must be relatively close to the user 20 so as to acquire the second image information. In this embodiment, the second image information is video information of the period during which the user 20 performs the first action, and this video information includes multiple consecutive frames of images.
Further, the image processing device 13 can analyze the second image information to generate an analysis result. The specific analysis process is as follows:
determine multiple first key points of the body of the user 20; extract, from each frame of the video information, the location information of all the first key points of the body; and output, as the analysis result to the intelligent robot, the location information of all the first key points in each frame together with the motion change information of each first key point. The motion change information is the change in location of each first key point across the frames of the video information.
For example, the selected first key points include a1, a2, and a3, where a1 is at the elbow joint, a2 at the shoulder joint, and a3 at the knee joint. Of course, in practice, more first key points can be determined in order to guarantee higher working precision; the positions at which the first key points are determined are not limited here. In this embodiment, joints are preferentially chosen as first key points, because when the human body moves, the change in position of the joints is relatively large, so the positions of the joints reflect the first action relatively clearly and directly. In this embodiment, the location information of each first key point can be expressed as coordinates. Since determining the first action mainly depends on the relative coordinates of the first key points rather than their absolute coordinates, the choice of coordinate system is not particularly limited.
The above video information includes multiple frames. Within the same frame, the relative positions of the first key points characterize the first action of the user 20 at that moment, while the coordinate change of the same first key point between different frames (namely the location change information) reflects the motion trend of the first action. Therefore, the analysis result output by the image processing device 13 to the intelligent robot 11 includes the location information (expressed as coordinates) of all first key points in each frame and the motion change information (expressed as coordinate changes) of each first key point across the frames.
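The analysis step described above (per-frame key-point locations plus each key point's coordinate change between frames) can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a pose estimator has already produced (x, y) coordinates for the first key points in each frame, and the function name and data layout are hypothetical.

```python
def analyze_frames(frames):
    """frames: list of dicts mapping key-point name -> (x, y) coordinate.

    Returns the analysis result: the per-frame locations of all first key
    points, plus each key point's motion change (coordinate delta) between
    consecutive frames.
    """
    locations = [dict(frame) for frame in frames]          # per-frame positions
    motion_changes = []
    for prev, curr in zip(frames, frames[1:]):
        delta = {
            name: (curr[name][0] - prev[name][0],          # change in x
                   curr[name][1] - prev[name][1])          # change in y
            for name in curr
        }
        motion_changes.append(delta)
    return {"locations": locations, "motion_changes": motion_changes}

# Two frames of a (hypothetical) arm-raising action, with a1 = elbow,
# a2 = shoulder, a3 = knee as in the example above:
frames = [
    {"a1": (10.0, 50.0), "a2": (0.0, 60.0), "a3": (5.0, 0.0)},
    {"a1": (12.0, 55.0), "a2": (0.0, 60.0), "a3": (5.0, 0.0)},
]
result = analyze_frames(frames)
print(result["motion_changes"][0]["a1"])  # (2.0, 5.0): only the elbow moved
```

Only relative coordinates and their deltas matter here, which is consistent with the text's remark that the choice of coordinate system is not particularly limited.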
The intelligent robot 11 is used to determine multiple second key points in one-to-one correspondence with the first key points: a second key point b1 corresponding to the first key point a1, a second key point b2 corresponding to the first key point a2, and a second key point b3 corresponding to the first key point a3. Similarly, in practice, more second key points can be determined in order to guarantee higher working precision; the positions at which the second key points are determined are not limited here.
In this embodiment, the outer shape of the intelligent robot 11 is made to imitate the human body; in other embodiments, the shape of the intelligent robot may also differ considerably from that of the human body.
According to the location information of all the first key points in each frame and the motion change information of each first key point, the intelligent robot 11 synchronously and correspondingly moves the positions of the second key points to perform the second action. Specifically:
according to the relative position coordinates of all the first key points in each frame, the intelligent robot 11 correspondingly calculates the relative positions of the second key points corresponding to those first key points, and then outputs instructions that control its second key points to form the above relative position coordinates, so that the intelligent robot 11 imitates the first action at the moment of that frame. To form a continuous series of movements, instructions for correspondingly moving each second key point must be output according to the motion change information (namely the coordinate change) of each first key point between frames. The actions performed by the intelligent robot 11 in this way constitute the second action, and through the above process the second action is matched to the first action.
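The mapping step described above can be sketched as follows, under stated assumptions: the a1/b1, a2/b2, a3/b3 correspondence table is taken from the example in the text, but the coordinate-target "command" format and the optional scale factor are illustrative only. A real robot would translate such targets into joint commands through its own kinematics layer.

```python
# Each first key point drives the corresponding second key point on the robot.
KEYPOINT_MAP = {"a1": "b1", "a2": "b2", "a3": "b3"}

def commands_for_frame(first_keypoints, scale=1.0):
    """Turn one frame's first-key-point coordinates into target coordinates
    for the corresponding second key points, optionally scaled to the robot's
    proportions (relevant when the robot's shape differs from the human body).
    """
    return {
        KEYPOINT_MAP[name]: (x * scale, y * scale)
        for name, (x, y) in first_keypoints.items()
    }

def replay(frames, scale=1.0):
    """One command set per frame; issuing these in frame order makes the
    robot's second key points trace the motion trend of the first action."""
    return [commands_for_frame(f, scale) for f in frames]

frame = {"a1": (10.0, 50.0), "a2": (0.0, 60.0), "a3": (5.0, 0.0)}
print(commands_for_frame(frame, scale=0.5)["b1"])  # (5.0, 25.0)
```

Because only relative positions are reproduced, the same sketch works whether the robot is human-shaped or not; the scale factor stands in for whatever proportional mapping a particular robot body requires.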
The human-computer interaction system applied to biometric recognition can recognize the user's first action. By determining the first key points of the user's body and the second key points of the intelligent robot, and mapping the coordinates of each first key point to the coordinates of the corresponding second key point, the intelligent robot is made to perform a second action matching the first action. The human-computer interaction system thus enables the user to remotely control the movement of the intelligent robot to complete corresponding tasks; especially in workplaces with high labor intensity or high danger, the intelligent robot works in place of the user, safeguarding the user's safety. Moreover, the human-computer interaction system provided by the embodiment of the present invention is based only on analyzing the image information, without relying on sensing devices, which greatly reduces the manufacturing cost of the whole system and favors its large-scale deployment, particularly in high-intensity, high-risk fields of operation, where it has high practical significance.
Embodiment 2
Referring to Fig. 3, the human-computer interaction method applied to biometric recognition provided in this embodiment is applied to the human-computer interaction system 10 of Embodiment 1. The human-computer interaction method includes:
Step S11: acquire first image information of the current environment;
Step S12: synchronously display the first image information to a user, so that the user can perform a first action according to the first image information;
Step S13: acquire second image information of the period during which the user performs the first action, and analyze the second image information to generate an analysis result;
Step S14: synchronously perform, according to the analysis result of the second image information, a second action matching the first action.
In step S13 above, the second image information is video information of the period during which the user 20 performs the first action, and this video information includes multiple frames.
In step S13, analyzing the second image information to generate the analysis result is specifically:
determining multiple first key points of the body of the user 20; extracting, from each frame of the video information, the location information of all the first key points of the body; and taking, as the analysis result, the location information of all the first key points in each frame together with the motion change information of each first key point.
Here, the motion change information of each first key point is the change in location of that first key point across the frames of the video information.
Further, in step S14, the process of performing the second action according to the analysis result is as described in Embodiment 1 and is not repeated here.
It can be understood that the human-computer interaction method applied to biometric recognition provided in this embodiment can recognize the user's first action. By determining the first key points of the user's body and the second key points of the intelligent robot, and mapping the coordinates of each first key point to the coordinates of the corresponding second key point, the intelligent robot is made to perform a second action matching the first action, so that the user can remotely control the movement of the intelligent robot and complete corresponding tasks; especially in workplaces with high labor intensity or high danger, the intelligent robot works in place of the user, safeguarding the user's safety. Moreover, the method is based only on analyzing the image information, without relying on sensing devices, which greatly reduces the manufacturing cost of the whole system and favors large-scale deployment, particularly in high-intensity, high-risk fields of operation, where it has high practical significance.
Embodiment 3
Referring to Fig. 4, the synchronization method applied to biometric recognition provided in this embodiment is applied to the above intelligent robot 11. The synchronization method includes:
Step S21: acquire first image information of the robot's current environment and output the first image information, the first image information being displayed to a user so that the user performs a first action according to the first image information;
Step S22: acquire an analysis result of second image information of the period during which the user performs the first action;
Step S23: according to the analysis result, synchronously perform a second action matching the first action.
The above synchronization method is applied in the intelligent robot 11 described in Embodiment 1. Specifically, during operation, the intelligent robot 11 is deployed at the work site (for example a high-risk, high-intensity site such as the deep sea or a cliff), while the user 20 can be at a location far from the work site. In step S21, images or/and video of the environment the intelligent robot 11 is currently in are acquired; these images or/and video constitute the first image information, which the intelligent robot 11 transmits to the synchronization device 12. In another embodiment, sound information of the current location can also be acquired in step S21, to meet the needs of application scenarios in which operations must be performed according to sound.
The user 20 can analyze and judge according to the first image information and perform a first action. For example, if the intelligent robot 11 is carrying out deep-sea salvage and a target salvage object appears in the current environment, the user 20 performs, in simulation, the action of grabbing the target salvage object (namely the first action).
In step S22, the analysis result of the second image information is acquired; the second image information is video information of the period during which the user performs the first action. In step S23, a second action matching the first action is synchronously performed according to the analysis result. The specific process of performing the second action according to the analysis result is described in Embodiment 1 and is not repeated here.
Further, in step S23, multiple second key points of the intelligent robot 11 are first determined, the second key points corresponding one-to-one with the first key points. According to the location information of all the first key points in each frame and the motion change information of each first key point, the positions of the corresponding second key points are synchronously moved to perform the second action, so that the second action matches the first action.
The above steps are described in detail in Embodiment 1 and are not repeated here.
It should be understood that the synchronization method applied to biometric recognition provided in this embodiment can achieve all the beneficial effects of Embodiment 1.
Referring to Fig. 5, the synchronization method applied to biometric recognition provided in this embodiment further includes, before step S23:
Step S24: judge whether the matching degree between the analysis result currently received from the image processing device and pre-stored dangerous-action information is higher than a pre-stored matching-degree threshold.
If the judgment is no, execute step S23.
If the judgment is yes, execute step S25: refuse to perform the operation or/and feed back the reason for refusal.
In this embodiment, the intelligent robot 11 is also used to pre-judge dangerous actions. Specifically, the intelligent robot 11 pre-stores dangerous-action information and a matching-degree threshold. The pre-stored dangerous-action information can be obtained statistically from many experiments: when performing a certain action would damage the body structure of the intelligent robot 11, that action is identified as a dangerous action, and the coordinates of each second key point when the intelligent robot 11 performs that dangerous action are stored as the dangerous-action information corresponding to it.
The analysis result of the second image information includes the coordinates of each first key point in each frame. In step S24, the coordinates of the first key points in each frame are compared with the coordinates of the second key points in the stored dangerous-action information. A matching-degree threshold is set, for example 97%. When the matching degree between the coordinates of the first key points in each frame of the analysis result and the coordinates of the second key points in the stored dangerous-action information is less than or equal to 97%, step S23 is executed, that is, the intelligent robot 11 is controlled to synchronously perform a second action matching the first action. When the matching degree is greater than 97%, step S25 is executed: the intelligent robot 11 refuses to perform the operation or/and feeds back the reason for refusal.
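The danger pre-judgment in step S24 can be sketched as follows, under explicit assumptions: the patent does not specify how the matching degree is computed, so the similarity score below (average key-point distance mapped to a 0..1 scale) is purely illustrative, and only the 97% threshold comes from the example in the text. The key-point correspondence is assumed to have already been applied, so both poses use the same names.

```python
import math

THRESHOLD = 0.97  # the 97% matching-degree threshold from the example

def matching_degree(frame_coords, danger_coords, tolerance=10.0):
    """Return a 0..1 similarity between one frame's key-point coordinates and
    a stored dangerous pose: 1.0 when identical, falling off with distance.
    The tolerance (coordinate units) is an assumed scale parameter."""
    dists = [
        math.dist(frame_coords[name], danger_coords[name])
        for name in danger_coords
    ]
    avg = sum(dists) / len(dists)
    return max(0.0, 1.0 - avg / tolerance)

def decide(frame_coords, danger_coords):
    """Step S24: refuse when the pose matches a dangerous action too closely."""
    if matching_degree(frame_coords, danger_coords) > THRESHOLD:
        return "refuse"      # step S25: refuse and feed back the reason
    return "execute"         # step S23: perform the matching second action

danger = {"b1": (0.0, 0.0), "b2": (0.0, 10.0)}
safe_pose = {"b1": (8.0, 0.0), "b2": (0.0, 2.0)}
print(decide(danger, danger))     # refuse
print(decide(safe_pose, danger))  # execute
```

In practice the check would run per frame over the whole analysis result, refusing (or adjusting, as in the variant below in the text) as soon as any frame crosses the threshold.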
Alternatively, in another embodiment, referring to Fig. 6, when the matching degree between the coordinates of the first key points in each frame of the analysis result and the coordinates of the second key points in the stored dangerous-action information is greater than 97%, that is, when the judgment in step S24 is yes, step S26 is executed: according to the analysis result and the dangerous-action information, the amplitude of the movement is adjusted when performing the second action. In other words, according to the difference between the coordinates of each first key point in the currently received analysis result and the coordinates of the corresponding second key point in the pre-stored dangerous-action information, the coordinates of each second key point are adjusted when performing the second action, which manifests as a fine-tuning of the amplitude of the second action.
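The amplitude fine-tuning of step S26 can be sketched as follows. The blending rule is an assumption for illustration only: each second key point's target is pushed a small fraction of the way away from the stored dangerous pose, based on the coordinate difference between the analysis result and the dangerous-action information, so the executed movement avoids the dangerous configuration.

```python
def adjust_amplitude(target, danger, backoff=0.1):
    """Move each second-key-point target away from the dangerous pose by
    `backoff` times the coordinate difference between them. The backoff
    fraction is an assumed tuning parameter."""
    return {
        name: (
            x + backoff * (x - danger[name][0]),
            y + backoff * (y - danger[name][1]),
        )
        for name, (x, y) in target.items()
    }

target = {"b1": (10.0, 50.0)}
danger = {"b1": (10.0, 60.0)}   # dangerous pose differs in height only
print(adjust_amplitude(target, danger)["b1"])  # (10.0, 49.0): backed off downward
```

The adjustment leaves coordinates that already differ from the dangerous pose almost unchanged and only meaningfully shifts the ones driving the match, which is one way to realize the "fine-tuning of the amplitude" the text describes.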
Because a machine and a human body differ in many structural respects (for example, in adjusting the center of gravity: the human body can adjust its own center of gravity to keep its balance according to different actions, which the intelligent robot 11 cannot do), there are movement postures that the human body can perform but that, when executed by the intelligent robot 11, may cause it to tip over due to an unstable center of gravity; if this problem occurs repeatedly, it easily causes structural damage to the intelligent robot 11. Therefore, the above pre-judgment of dangerous actions helps alleviate this problem, helps avoid structural damage to the intelligent robot 11, and in turn helps extend the service life of the intelligent robot 11.
Embodiment 4
Referring to Fig. 7, the image processing method provided in this embodiment is applied to the above image processing device 13. The image processing method includes:
Step S31: acquire second image information of the period during which a user performs a first action according to first image information;
Step S32: analyze the second image information and generate an analysis result, and output the analysis result to an intelligent robot, so that the intelligent robot synchronously performs, according to the analysis result, a second action matching the first action.
In step S31, the image processing device 13 is used to acquire second image information of the period during which the user 20 performs the first action; it can therefore be understood that the image processing device 13 must be relatively close to the user 20 so as to acquire the second image information. In this embodiment, the second image information is video information of the period during which the user 20 performs the first action, and this video information includes multiple consecutive frames of images.
Further, in step S32, image processing equipment 13 can be analyzed the second image information to generate analysis knot Fruit makes a concrete analysis of process are as follows:
It determines multiple first key points of 20 human body of user, extracts in video information all the of human body in each frame image The location information of one key point, by the fortune of the location information of the first key points all in each frame image and each first key point Dynamic change information is exported as analysis result to intelligent robot.Wherein, motion change information is that each first key point exists Change in location information in each frame image of video information.
For example, the chosen first key points include a1, a2 and a3, where a1 is at the elbow joint, a2 is at the shoulder joint, and a3 is at the knee joint. Of course, in practice, more first key points can be determined in order to guarantee higher operating precision, and the positions at which the first key points are determined are not limited here. In this embodiment, joints are preferred when selecting the first key points, because when the human body is in motion the positional change of the joints is relatively large, so the positions of the joints reflect the first movement more clearly and directly. In this embodiment, the location information of each first key point can be expressed as coordinates; since determining the first movement mainly depends on the relative coordinates of the first key points rather than on their absolute coordinates, the choice of coordinate system is not particularly limited.
The above video information includes multiple frames of images. Within the same frame, the relative positions of the first key points characterize the first movement of the user 20 at that moment, while the change in coordinates of the same first key point across different frames (i.e. the location change information) reflects the motion trend of the first movement. Therefore, the analysis result output by the image processing device 13 to the intelligent robot 11 includes the location information (expressed as coordinates) of all the first key points in each frame and the motion change information (expressed as coordinate deltas) of each first key point across the frames.
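On the robot's side, the synchronization can be sketched as a replay of those coordinate deltas onto the one-to-one-corresponding second key points. The mapping table, the dict layout of the analysis result (a "motion_changes" list of per-frame keypoint deltas), and the servo names below are all assumptions for illustration, not details from the patent.

```python
# Hypothetical one-to-one mapping from the user's first key points to the
# intelligent robot's second key points.
FIRST_TO_SECOND = {"a1": "servo_elbow", "a2": "servo_shoulder", "a3": "servo_knee"}


def replay(analysis_result, robot_pose):
    """Apply each frame's motion change to the corresponding second key point.

    analysis_result: {"motion_changes": [{first_kp: (dx, dy), ...}, ...]}
    robot_pose: {second_kp: (x, y)} -- mutated and returned.
    """
    for frame_delta in analysis_result["motion_changes"]:
        for first_kp, (dx, dy) in frame_delta.items():
            second_kp = FIRST_TO_SECOND[first_kp]
            x, y = robot_pose[second_kp]
            robot_pose[second_kp] = (x + dx, y + dy)
    return robot_pose
```

Because only deltas are replayed, the robot's second movement tracks the trend of the first movement regardless of where either coordinate system is anchored, which is consistent with the embodiment's remark that relative rather than absolute coordinates are what matter.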
It should be appreciated that the image processing method provided in this embodiment likewise achieves the beneficial effects described in Embodiment 1.
The above method steps of the present invention can be completed by a computer program instructing the relevant hardware. Described with reference to Fig. 8, in an embodiment of the present invention, the intelligent robot 11 includes a readable storage medium 111, a processor 112, and a computer program 113 stored in the readable storage medium 111 and executable on the processor 112. When the processor 112 executes the computer program 113, the steps of the synchronization methods of the method embodiments of Figs. 4 to 7 can be implemented.
Illustratively, the computer program 113 can be divided into one or more modules/units, which are stored in the readable storage medium 111 and executed by the processor 112 to carry out the present invention. The one or more modules/units can be a series of computer instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 113 in the intelligent robot 11.
The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium can be increased or decreased appropriately according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals or telecommunication signals.
It is obvious to a person skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whichever point of view, the present embodiments are to be considered illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all variations falling within the meaning and scope of the equivalent elements of the claims be included in the present invention. Any reference signs in the claims should not be construed as limiting the claims involved. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units, modules or devices stated in a system, device or terminal claim can also be implemented by the same unit, module or device through software or hardware. Words such as "first" and "second" are used to indicate names and do not indicate any particular order.
Those skilled in the art should appreciate that the above embodiments are intended merely to illustrate the present invention and are not to be used as a limitation of the invention; as long as they remain within the spirit of the invention, appropriate changes and variations to the above embodiments all fall within the scope of protection of the present invention.

Claims (10)

1. A synchronization method applied to biometric recognition, applied to an intelligent robot, characterized in that the method includes:
obtaining first image information of the robot's current environment and outputting the first image information, the first image information being used for display to a user so that the user performs a first movement according to the first image information;
obtaining an analysis result of second image information of the period during which the user performs the first movement;
according to the analysis result, synchronously executing a second movement that matches the first movement.
2. The synchronization method applied to biometric recognition according to claim 1, characterized in that the second image information includes multiple frames of images, each frame including the location information of multiple first key points; said synchronously executing, according to the analysis result, a second movement that matches the first movement specifically includes:
determining multiple second key points of the intelligent robot, wherein the second key points correspond one-to-one with the first key points;
according to the location information of all the first key points in each frame and the motion change information of each first key point, synchronously moving the positions of the corresponding second key points to execute the second movement, so that the second movement matches the first movement.
3. The synchronization method applied to biometric recognition according to claim 1, characterized in that, before said synchronously executing, according to the analysis result, a second movement that matches the first movement, the method further includes:
judging whether the degree of match between the analysis result currently received from the image processing device and pre-stored dangerous-movement information is higher than a pre-stored matching-degree threshold.
4. The synchronization method applied to biometric recognition according to claim 3, characterized in that the step of synchronously executing, according to the analysis result, a second movement that matches the first movement is specifically:
if the judgment is yes, refusing to execute the operation and/or feeding back a reason for refusal;
if the judgment is no, synchronously executing the second movement that matches the first movement.
5. The synchronization method applied to biometric recognition according to claim 3, characterized in that, if the judgment is yes, the movement amplitude is adjusted when the second movement is executed, according to the analysis result and the dangerous-movement information.
6. An image processing method applied to biometric recognition, applied to an image processing device, characterized in that the image processing method includes the following steps:
obtaining second image information of the period during which a user performs a first movement according to first image information;
analyzing the second image information to generate an analysis result, and outputting the analysis result to an intelligent robot, so that the intelligent robot synchronously executes, according to the analysis result, a second movement that matches the first movement.
7. A human-computer interaction method applied to biometric recognition, characterized by comprising:
obtaining first image information of the current environment;
synchronously displaying the first image information to a user, so that the user can perform a first movement according to the first image information;
obtaining second image information of the period during which the user performs the first movement, and analyzing the second image information to generate an analysis result;
synchronously executing, according to the analysis result of the second image information, a second movement that matches the first movement.
8. A human-computer interaction system applied to biometric recognition, characterized by comprising:
an intelligent robot, the intelligent robot being used to obtain first image information of its current environment and transmit the first image information to a synchronization device;
the synchronization device, used to display the first image information to a user, so that the user can perform a first movement according to the first image information;
an image processing device, used to obtain second image information of the period during which the user performs the first movement, analyze the second image information to generate an analysis result, and output the analysis result of the second image information to the intelligent robot;
the intelligent robot synchronously executing, according to the analysis result of the second image information, a second movement that matches the first movement.
9. A computer, characterized in that the computer includes a processor and a memory, the processor implementing the steps of the method according to any one of claims 1 to 7 when executing the computer program stored in the memory.
10. A computer-readable storage medium on which a computer program or instructions are stored, characterized in that the computer program or instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
CN201910565031.4A 2019-06-27 2019-06-27 Synchronous method, image processing method, man-machine interaction method and relevant device Pending CN110465937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910565031.4A CN110465937A (en) 2019-06-27 2019-06-27 Synchronous method, image processing method, man-machine interaction method and relevant device


Publications (1)

Publication Number Publication Date
CN110465937A 2019-11-19

Family

ID=68507054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910565031.4A Pending CN110465937A (en) 2019-06-27 2019-06-27 Synchronous method, image processing method, man-machine interaction method and relevant device

Country Status (1)

Country Link
CN (1) CN110465937A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100303289A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Device for identifying and tracking multiple humans over time
CN103398702A (en) * 2013-08-05 2013-11-20 青岛海通机器人系统有限公司 Mobile-robot remote control apparatus and control technology
US20150363637A1 (en) * 2014-06-16 2015-12-17 Lg Electronics Inc. Robot cleaner, apparatus and method for recognizing gesture
CN106625724A (en) * 2016-11-29 2017-05-10 福州大学 Industrial robot body security control method oriented to cloud control platform
CN107239728A (en) * 2017-01-04 2017-10-10 北京深鉴智能科技有限公司 Unmanned plane interactive device and method based on deep learning Attitude estimation
CN107765855A (en) * 2017-10-25 2018-03-06 电子科技大学 A kind of method and system based on gesture identification control machine people motion
CN107856039A (en) * 2017-11-16 2018-03-30 北京科技大学 A kind of service robot system and method for accompanying and attending to of supporting parents of accompanying and attending to of supporting parents
CN108839018A (en) * 2018-06-25 2018-11-20 盐城工学院 A kind of robot control operating method and device
CN109227540A (en) * 2018-09-28 2019-01-18 深圳蓝胖子机器人有限公司 A kind of robot control method, robot and computer readable storage medium
CN109829451A (en) * 2019-03-22 2019-05-31 京东方科技集团股份有限公司 Organism action identification method, device, server and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191119