CN114758399A - Expression control method, device, equipment and storage medium of bionic robot

Info

Publication number
CN114758399A
Authority
CN
China
Prior art keywords
steering engine
face key
expression
key points
facial expression
Prior art date
Legal status
Pending
Application number
CN202210462864.XA
Other languages
Chinese (zh)
Inventor
刘娜 (Liu Na)
袁野 (Yuan Ye)
赵帅康 (Zhao Shuaikang)
刘岩岩 (Liu Yanyan)
Current Assignee
Henan Zhongyuan Power Intelligent Manufacturing Co ltd
Original Assignee
Henan Zhongyuan Power Intelligent Manufacturing Co ltd
Priority date
Filing date
Publication date
Application filed by Henan Zhongyuan Power Intelligent Manufacturing Co ltd
Priority to CN202210462864.XA
Publication of CN114758399A

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00 Pattern recognition
                    • G06F18/20 Analysing
                        • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F18/22 Matching criteria, e.g. proximity measures
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                        • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an expression control method, device, equipment and storage medium for a bionic robot. An acquired facial expression image is input into a pre-trained neural network model for feature extraction, and a feature image is output; first face key points extracted from the feature image are matched with preset face key points, the offset between them is calculated, and the facial expression is recognized from that offset; the steering engine motion trajectory corresponding to the recognized facial expression is then matched, and the steering engines of the bionic robot are controlled to move along that trajectory to generate the bionic robot expression. Compared with the prior art, the technical scheme recognizes the facial expression by calculating the offset between the first face key points of the feature image and the preset face key points, which improves recognition accuracy, and controls the steering engines directly from the recognized facial expression, which increases the speed at which the bionic robot generates the facial expression.

Description

Expression control method, device, equipment and storage medium of bionic robot
Technical Field
The invention relates to the technical field of intelligent bionic robots, and in particular to a method, device, equipment and storage medium for controlling the expressions of a bionic robot.
Background
With the aging of the population and the growing maturity of industrial robot technology, expectations for robots have risen from simple, repetitive mechanical actions toward bionic robots that are highly intelligent, autonomous, and able to interact with other intelligent agents. Unlike conventional robots, a bionic robot is expected to show real emotion and communicate naturally with humans. Since most emotional information in human communication is conveyed by facial expressions, people also want the bionic robot to express its own emotions by controlling its expression.
In the prior art, however, facial micro-expressions are recognized either by extracting local features from hand-designed regular local regions or directly from global facial features. Neither approach accounts for the strong positive or negative correlations between micro-expressions caused by muscle movement, nor for the fact that the activation region of each micro-expression is irregular and may be discontinuous, so the accuracy of the recognized facial expressions suffers. In addition, when existing face models are deployed on hardware, limited computing power causes real-time performance problems that slow facial expression recognition and, in turn, slow the bionic robot's generation of facial expressions.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an expression control method, device, equipment and storage medium for a bionic robot that improve the recognition accuracy of facial expressions while increasing the speed at which the bionic robot generates them.
In order to solve the above technical problem, the invention provides an expression control method of a bionic robot, which comprises the following steps:
acquiring a facial expression image and inputting it into a pre-trained neural network model, so that the neural network model performs feature extraction on the facial expression image and outputs a feature image;
extracting first face key points from the feature image, matching the first face key points with preset face key points, calculating the offset between the first face key points and the preset face key points, and recognizing the facial expression according to the offset;
and matching the corresponding steering engine motion trajectory according to the recognized facial expression, and controlling the steering engines of the bionic robot to move along that trajectory to generate the bionic robot expression.
In a possible implementation manner, matching the corresponding steering engine motion trajectory according to the recognized facial expression specifically comprises:
traversing a steering engine motion lookup table and acquiring the motion direction and motion angle of each steering engine corresponding to the recognized facial expression;
wherein the steering engine motion lookup table contains the motion directions and motion angles of the steering engines corresponding to different categories of facial expression.
In a possible implementation manner, matching the first face key points with the preset face key points, calculating the offset between them and recognizing the facial expression according to the offset specifically comprises:
matching the first face key points with preset face key points, wherein the preset face key points comprise preset key points for different expression categories;
and calculating the offsets between the first face key points and the preset key points of each expression category, obtaining the minimum of all calculated offsets, and taking the expression category whose preset key points yield that minimum as the recognition result.
In a possible implementation manner, the steering engines of the bionic robot comprise a first eyebrow steering engine, a second eyebrow steering engine, a first left eye steering engine, a second left eye steering engine, a first right eye steering engine, a second right eye steering engine, a first mouth corner steering engine, a second mouth corner steering engine, a first mouth steering engine, a first neck steering engine, a second neck steering engine and a third neck steering engine.
The embodiment of the invention also provides an expression control device of the bionic robot, which comprises a feature extraction module, an expression recognition module and a steering engine control module;
the feature extraction module is used for acquiring a facial expression image and inputting it into a pre-trained neural network model, so that the neural network model performs feature extraction on the facial expression image and outputs a feature image;
the expression recognition module is used for extracting first face key points from the feature image, matching the first face key points with preset face key points, calculating the offset between the first face key points and the preset face key points, and recognizing the facial expression according to the offset;
and the steering engine control module is used for matching the corresponding steering engine motion trajectory according to the recognized facial expression, and controlling the steering engines of the bionic robot to move along that trajectory to generate the bionic robot expression.
In a possible implementation manner, the steering engine control module matches the corresponding steering engine motion trajectory by traversing a steering engine motion lookup table and acquiring the motion direction and motion angle of each steering engine corresponding to the recognized facial expression; the lookup table contains the motion directions and motion angles of the steering engines corresponding to different categories of facial expression.
In a possible implementation manner, the expression recognition module matches the first face key points with preset face key points comprising preset key points for different expression categories, calculates the offsets between the first face key points and the preset key points of each expression category, obtains the minimum of all calculated offsets, and takes the expression category whose preset key points yield that minimum as the recognition result.
In a possible implementation manner, the steering engines of the bionic robot in the steering engine control module comprise a first eyebrow steering engine, a second eyebrow steering engine, a first left eye steering engine, a second left eye steering engine, a first right eye steering engine, a second right eye steering engine, a first mouth corner steering engine, a second mouth corner steering engine, a first mouth steering engine, a first neck steering engine, a second neck steering engine and a third neck steering engine.
An embodiment of the present invention further provides a terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the above expression control method of the bionic robot when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium comprising a stored computer program, wherein when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the above expression control method of the bionic robot.
Compared with the prior art, the expression control method, device, equipment and storage medium of the bionic robot have the following beneficial effects:
features are extracted from the facial expression image by a pre-trained neural network model, which outputs a feature image, and the facial expression is recognized from the calculated offset between the first face key points of the feature image and the preset face key points. This differs from the prior art, which recognizes facial micro-expressions directly from global facial features and thereby loses accuracy; recognition based on the extracted first face key points captures finer micro-expressions and improves the accuracy of facial expression recognition. Moreover, once the facial expression is recognized, the steering engines of the bionic robot are driven along the matched motion trajectory, which increases the speed at which the bionic robot generates the facial expression.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the expression control method of a bionic robot according to the present invention;
Fig. 2 is a schematic structural diagram of an embodiment of the expression control apparatus of a bionic robot according to the present invention;
Fig. 3 is a schematic distribution diagram of the steering engines of the bionic robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the present application. The described embodiments are obviously only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art on the basis of these embodiments without inventive effort fall within the scope of protection of the present invention.
Example 1
Referring to fig. 1, which shows a schematic flow chart of an embodiment of the expression control method of a bionic robot provided by the present invention, the method comprises steps 101 to 103, as follows:
step 101: the method comprises the steps of obtaining a facial expression image, inputting the facial expression image into a pre-trained neural network model, enabling the neural network model to extract features of the facial expression image, and outputting a feature image.
In one embodiment, a training image data set of facial expressions is acquired, containing expression data such as frowning, mouth opening, eye closing, eye opening and smiling. The data set is divided into a training set, a test set and a validation set in a preset proportion, and the constructed neural network model is trained with the acquired facial expression training images; the constructed neural network model is a U-Net network.
In one embodiment, the constructed neural network model is trained as follows. The model parameters are first initialized, including initialization of the classifier network, the multi-feature extraction network and the network parameters w and b. The training image data set is then input into the neural network model for face recognition training: the network parameters w and b are computed by gradient descent, a loss function with boundary weights is designed for the neural network model, and the network weights are updated with the loss value. Training is repeated until the loss value of the neural network model gradually stabilizes and no longer decreases, which completes the pre-training of the constructed neural network model; a minimal sketch of such a loop follows.
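The patent does not disclose the exact network, boundary-weighted loss or data pipeline, so the sketch below uses tiny stand-ins (a toy classifier, cross-entropy in place of the boundary-weighted loss, synthetic tensors) purely to illustrate the gradient-descent loop with a stabilizing-loss stopping rule:

```python
import torch
import torch.nn as nn

# Stand-ins for illustration only: the patent's actual U-Net, boundary-weighted
# loss and training images are not disclosed here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))  # 5 expression classes
criterion = nn.CrossEntropyLoss()         # placeholder for the boundary-weighted loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)  # gradient descent

images = torch.randn(32, 3, 64, 64)       # synthetic "training set"
labels = torch.randint(0, 5, (32,))

prev_loss = float("inf")
for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                       # gradients for network parameters w, b
    optimizer.step()                      # update the network weights
    # stop when the loss value stabilizes and no longer decreases
    if abs(prev_loss - loss.item()) < 1e-5:
        break
    prev_loss = loss.item()
```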
In one embodiment, to address the real-time performance problems that arise when a facial expression recognition model is deployed on hardware with limited computing power, the constructed neural network model is compressed by pruning after training: unimportant layers and parameters are removed to make the model as lightweight as possible. The high-performance deep learning inference optimizer TensorRT is then used to accelerate inference on the model; by converting floating-point operations into integer operations, TensorRT greatly reduces the amount of parameter computation and thus markedly increases the recognition speed of the neural network model. Introducing model compression and model acceleration in this way provides an optimal model for the bionic robot and reduces the hardware deployment cost.
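One plausible realization of this compress-then-accelerate pipeline, not the patent's disclosed implementation, is to prune with PyTorch's built-in utilities, export to ONNX, and build an INT8 TensorRT engine with the trtexec tool; the toy network, file names and 30% pruning ratio are assumptions, and a real INT8 build would normally supply calibration data:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for the face recognition network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 5, 3, padding=1))

# Prune 30% (assumed ratio) of the smallest weights in each conv layer,
# then make the pruning permanent so the mask is folded into the weights.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Export to ONNX so TensorRT can consume the model (640x640 input as in the text).
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(model, dummy, "face_model.onnx", opset_version=13)

# Assumed deployment step, run on the target device:
#   trtexec --onnx=face_model.onnx --int8 --saveEngine=face_model.engine
```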
In one embodiment, a camera is embedded in the head region of the bionic robot. The camera is switched on to acquire facial expression images in real time, and each image is preprocessed; preprocessing includes scaling the facial expression image so that its size meets the input requirement of the pre-trained neural network model. In this embodiment, the scaled image size is 640 × 640.
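A minimal sketch of this acquisition and preprocessing step with OpenCV; the device index, color conversion and channel order are assumptions, and only the 640 × 640 target size comes from the embodiment:

```python
import cv2

cap = cv2.VideoCapture(0)       # camera embedded in the robot's head (index assumed)
ok, frame = cap.read()          # acquire one facial expression image in real time
if ok:
    face_input = cv2.resize(frame, (640, 640))                # model input size
    face_input = cv2.cvtColor(face_input, cv2.COLOR_BGR2RGB)  # assumed RGB model input
cap.release()
```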
In one embodiment, the preprocessed facial expression image is input into the pre-trained neural network model, feature images of different scales are obtained through feature extraction, the feature images of different scales are fused in a multi-scale fusion step, and the fused feature image is output.
Step 102: extracting first face key points from the feature image, matching the first face key points with preset face key points, calculating the offset between the first face key points and the preset face key points, and recognizing the facial expression according to the offset.
In one embodiment, the preset face key points comprise the 108 facial key points defined by the U-Net network, distributed at different positions on the face.
In one embodiment, it is first judged whether the acquired feature image shows a frontal face. If so, local feature detection is performed on the feature image by the constructed neural network model to extract its first face key points, and the extracted first face key points are matched with the preset face key points, where the preset face key points comprise preset key points for different expression categories. The offsets between the first face key points and the preset key points of each expression category are calculated separately, the minimum of all calculated offsets is obtained, and the expression category whose preset key points yield that minimum is taken as the recognition result, as sketched below.
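The patent does not specify the offset metric, so this sketch assumes the summed Euclidean distance over the 108 points; the random per-category templates are placeholders for the preset face key points:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder 108-point templates, one per expression category.
preset = {cat: rng.random((108, 2)) for cat in
          ("frown", "mouth_open", "eyes_closed", "eyes_open", "smile")}

def recognize(first_keypoints: np.ndarray) -> str:
    """Return the expression category whose preset key points give the minimum offset."""
    offsets = {cat: np.linalg.norm(first_keypoints - pts, axis=1).sum()
               for cat, pts in preset.items()}
    return min(offsets, key=offsets.get)

print(recognize(rng.random((108, 2))))  # e.g. "smile"
```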
In one embodiment, if the extracted feature image does not show a frontal face, local feature detection is still performed on the acquired feature image by the constructed neural network model to extract the first face key points, and the extracted first face key points are then corrected. Specifically, a set of alignment correction matrices is obtained by averaging all face key point information in a standard face library; the aligned first face key points are obtained through this matrix transformation and matched with the preset face key points, and the offsets between the aligned first face key points and the preset key points of each expression category are calculated separately. The minimum of all calculated offsets is obtained, and the expression category whose preset key points yield that minimum is taken as the recognition result.
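One way such a correction might be computed, assuming a similarity (partial affine) transform estimated against the library's mean key points; both mean_face and the detected points below are synthetic placeholders:

```python
import numpy as np
import cv2

rng = np.random.default_rng(1)
mean_face = rng.random((108, 2)).astype(np.float32)     # average of a standard face library
detected = (mean_face * 1.2 + 0.05).astype(np.float32)  # non-frontal first face key points

# Estimate a similarity transform (assumed form of the correction matrix)
# mapping the detected points onto the library mean, then apply it.
M, _ = cv2.estimateAffinePartial2D(detected, mean_face)
aligned = cv2.transform(detected.reshape(-1, 1, 2), M).reshape(-1, 2)
# `aligned` is then matched against the preset key points exactly as in the
# frontal-face branch above.
```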
In this embodiment, facial expression recognition is based on the constructed face recognition neural network model rather than on traditional methods such as dedicated sensors and hand-crafted image processing; using a deep learning method greatly reduces the dependence on the hardware system.
Step 103: matching the corresponding steering engine motion trajectory according to the recognized facial expression, and controlling the steering engines of the bionic robot to move along that trajectory to generate the bionic robot expression.
In one embodiment, the steering engines of the bionic robot are installed in its head region and comprise a first eyebrow steering engine 1, a second eyebrow steering engine 2, a first left eye steering engine 3, a second left eye steering engine 4, a first right eye steering engine 5, a second right eye steering engine 6, a first mouth corner steering engine 7, a second mouth corner steering engine 8, a first mouth steering engine 9, a first neck steering engine 10, a second neck steering engine 11 and a third neck steering engine 12; the distribution of the steering engines is shown in fig. 3.
In one embodiment, a steering engine motion lookup table is generated from the motion directions and motion angles assigned to the steering engines for each category of facial expression. Specifically, the coordinate positions of all steering engines of the bionic robot are obtained and converted into the coordinate frame of the face images in the training image data set, which gives the position of each steering engine within face images of each category. Each steering engine is then moved so that it coincides with its corresponding position in the face images of each category, and from these movement trajectories the motion direction and motion angle of each steering engine for each facial expression category are obtained and written into the steering engine motion lookup table.
As an example in this embodiment, when the facial expression category is smiling, the first mouth corner steering engine 7 and the second mouth corner steering engine 8 are set to rotate upward by a preset angle, while all other steering engines keep their current direction and angle; a sketch of such a table follows.
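A minimal sketch of what the lookup table might look like as a data structure; the servo IDs follow fig. 3, the smiling entry follows the example above, and all angle values and the frowning entry are illustrative:

```python
# Each entry maps an expression category to (servo id, direction, angle in degrees).
SERVO_LUT = {
    "smile": [
        (7, "up", 15),    # first mouth corner steering engine rotates upward
        (8, "up", 15),    # second mouth corner steering engine rotates upward
        # all other steering engines keep their current direction and angle
    ],
    "frown": [            # illustrative entry, not taken from the text
        (1, "down", 10),  # first eyebrow steering engine
        (2, "down", 10),  # second eyebrow steering engine
    ],
}

def lookup(expression: str) -> list:
    """Traverse the lookup table for the recognized facial expression."""
    return SERVO_LUT.get(expression, [])
```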
In one embodiment, the motion direction and motion angle of each steering engine corresponding to the recognized facial expression are obtained by traversing the steering engine motion lookup table, and the steering engines of the bionic robot are controlled to move accordingly, so that the bionic robot imitates the recognized facial expression and generates the bionic robot expression.
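A sketch of issuing the resulting commands over a serial servo bus, reusing the lookup helper from the sketch above; the port name, baud rate and ASCII command format are assumptions, since a real robot would use its own servo bus protocol:

```python
import serial  # pyserial

def apply_expression(expression: str, port: str = "/dev/ttyUSB0") -> None:
    """Drive the steering engines for one recognized expression (protocol assumed)."""
    with serial.Serial(port, 115200, timeout=1) as bus:
        for servo_id, direction, angle in lookup(expression):
            cmd = f"#{servo_id}D{direction[0].upper()}A{angle}\n"  # hypothetical format
            bus.write(cmd.encode("ascii"))  # one command per steering engine

apply_expression("smile")
```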
In one embodiment, the constructed face recognition neural network model is deployed on an edge device embedded in the bionic robot, so that the robot obtains the facial expression recognition result directly, issues control signals promptly and drives the steering engines, which increases the speed at which the bionic robot expression is generated.
Example 2
Referring to fig. 2, which shows a schematic structural diagram of an embodiment of the expression control apparatus of a bionic robot provided by the present invention, the apparatus comprises a feature extraction module 201, an expression recognition module 202 and a steering engine control module 203, as follows:
the feature extraction module 201 is configured to obtain a facial expression image, and input the facial expression image into a pre-trained neural network model, so that the neural network model performs feature extraction on the facial expression image, and outputs a feature image.
The expression recognition module 202 is configured to extract a first face key point of the feature image, match the first face key point with a preset face key point, calculate an offset between the first face key point and the preset face key point, and perform facial expression recognition according to the offset.
And the steering engine control module 203 is used for matching the corresponding steering engine motion trajectory according to the recognized facial expression, and controlling the steering engines of the bionic robot to move along that trajectory to generate the bionic robot expression.
In one embodiment, the steering engine control module 203 matches the corresponding steering engine motion trajectory by traversing the steering engine motion lookup table and acquiring the motion direction and motion angle of each steering engine corresponding to the recognized facial expression; the lookup table contains the motion directions and motion angles of the steering engines corresponding to different categories of facial expression.
In an embodiment, the expression recognition module 202 is configured to match the first face key points with preset face key points, calculate the offset between them and recognize the facial expression according to the offset, which specifically includes:
matching the first face key points with preset face key points, wherein the preset face key points comprise preset key points for different expression categories;
and calculating the offsets between the first face key points and the preset key points of each expression category, obtaining the minimum of all calculated offsets, and taking the expression category whose preset key points yield that minimum as the recognition result.
In an embodiment, the steering engines of the bionic robot in the steering engine control module 203 include a first eyebrow steering engine, a second eyebrow steering engine, a first left eye steering engine, a second left eye steering engine, a first right eye steering engine, a second right eye steering engine, a first mouth corner steering engine, a second mouth corner steering engine, a first mouth steering engine, a first neck steering engine, a second neck steering engine and a third neck steering engine.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not repeated here.
It should be noted that the above embodiment of the expression control apparatus of the bionic robot is merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
On the basis of the above embodiment of the expression control method of the bionic robot, another embodiment of the present invention provides an expression control terminal device of the bionic robot, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the expression control method of the bionic robot according to any embodiment of the present invention when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to implement the invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution process of the computer program in the expression control terminal device of the bionic robot.
The expression control terminal device of the bionic robot may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device, and may include, but is not limited to, a processor and a memory.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the expression control terminal device of the bionic robot, and connects all parts of the whole device using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the expression control terminal device of the bionic robot by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
On the basis of the above embodiment of the expression control method of the bionic robot, another embodiment of the present invention provides a storage medium comprising a stored computer program, wherein when the computer program runs, the device on which the storage medium is located is controlled to execute the expression control method of the bionic robot according to any embodiment of the present invention.
In this embodiment, the storage medium is a computer-readable storage medium, and the computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be added to or removed from as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals.
In summary, the expression control method, device, equipment and storage medium of the bionic robot provided by the invention input the acquired facial expression image into a pre-trained neural network model for feature extraction and output a feature image; match the first face key points extracted from the feature image with preset face key points, calculate the offset between them and recognize the facial expression; and match the corresponding steering engine motion trajectory according to the recognized facial expression and control the steering engines of the bionic robot to move along that trajectory to generate the bionic robot expression. Compared with the prior art, the technical scheme recognizes the facial expression by calculating the offset between the first face key points of the feature image and the preset face key points, which improves recognition accuracy, and controls the steering engines directly from the recognized facial expression, which increases the speed at which the bionic robot generates the facial expression.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the technical principle of the invention, and such improvements and modifications shall also fall within the protection scope of the invention.

Claims (10)

1. An expression control method of a bionic robot is characterized by comprising the following steps:
acquiring a facial expression image, inputting the facial expression image into a pre-trained neural network model so as to enable the neural network model to extract the characteristics of the facial expression image and output a characteristic image;
extracting first face key points of the feature images, matching the first face key points with preset face key points, calculating the offset between the first face key points and the preset face key points, and recognizing facial expressions according to the offset;
and matching the corresponding steering engine motion trail according to the recognized facial expression, and controlling a steering engine of the bionic robot to move according to the steering engine motion trail to generate the bionic robot expression.
2. The expression control method of the bionic robot as claimed in claim 1, wherein the matching of the corresponding steering engine motion trajectory according to the recognized facial expression specifically comprises:
traversing a steering engine motion lookup table, and acquiring a motion direction and a motion angle of a steering engine corresponding to the facial expression according to the identified facial expression;
the steering engine motion lookup table comprises motion directions and motion angles of steering engines corresponding to different types of human face expressions.
3. The method as claimed in claim 1, wherein the matching of the first face key point with a preset face key point, calculating an offset between the first face key point and the preset face key point, and performing facial expression recognition according to the offset specifically comprises:
matching the first face key points with preset face key points, wherein the preset face key points comprise preset face key points of different expression categories;
and respectively calculating the offsets between the first face key point and the preset face key points of different expression categories, acquiring the minimum value of all calculated offsets, and performing face recognition on the preset face key point corresponding to the minimum value.
4. The expression control method of the bionic robot as claimed in claim 2, wherein the steering engines of the bionic robot comprise a first eyebrow steering engine, a second eyebrow steering engine, a first left eye steering engine, a second left eye steering engine, a first right eye steering engine, a second right eye steering engine, a first mouth corner steering engine, a second mouth corner steering engine, a first mouth steering engine, a first neck steering engine, a second neck steering engine and a third neck steering engine.
5. An expression control device of a bionic robot is characterized by comprising: the system comprises a feature extraction module, an expression recognition module and a steering engine control module;
the feature extraction module is used for acquiring a facial expression image, inputting the facial expression image into a pre-trained neural network model, so that the neural network model performs feature extraction on the facial expression image and outputs a feature image;
the expression recognition module is used for extracting first face key points of the characteristic image, matching the first face key points with preset face key points, calculating the offset between the first face key points and the preset face key points, and recognizing facial expressions according to the offset;
and the steering engine control module is used for matching the corresponding steering engine motion trail according to the recognized facial expression, and controlling a steering engine of the bionic robot to move according to the steering engine motion trail to generate the bionic robot expression.
6. The expression control device of the bionic robot as claimed in claim 5, wherein the steering engine control module is configured to match a corresponding steering engine motion trajectory according to the recognized facial expression, and specifically comprises:
traversing a steering engine motion lookup table, and acquiring the motion direction and the motion angle of the steering engine corresponding to the facial expression according to the recognized facial expression;
the steering engine motion lookup table comprises motion directions and motion angles of steering engines corresponding to different types of human face expressions.
7. The apparatus as claimed in claim 5, wherein the expression recognition module is configured to match the first face key point with a preset face key point, calculate an offset between the first face key point and the preset face key point, and perform facial expression recognition according to the offset, and specifically includes:
matching the first face key points with preset face key points, wherein the preset face key points comprise preset face key points of different expression categories;
and respectively calculating the offsets between the first face key point and the preset face key points of different expression categories, acquiring the minimum value of all calculated offsets, and performing face recognition on the preset face key point corresponding to the minimum value.
8. The expression control device of the bionic robot as claimed in claim 6, wherein the steering engines of the bionic robot in the steering engine control module comprise a first eyebrow steering engine, a second eyebrow steering engine, a first left eye steering engine, a second left eye steering engine, a first right eye steering engine, a second right eye steering engine, a first mouth corner steering engine, a second mouth corner steering engine, a first mouth steering engine, a first neck steering engine, a second neck steering engine and a third neck steering engine.
9. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the expression control method of the biomimetic robot according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program runs, the computer-readable storage medium controls an apparatus to execute the method according to any one of claims 1 to 4.
CN202210462864.XA 2022-04-28 2022-04-28 Expression control method, device, equipment and storage medium of bionic robot Pending CN114758399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210462864.XA CN114758399A (en) 2022-04-28 2022-04-28 Expression control method, device, equipment and storage medium of bionic robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210462864.XA CN114758399A (en) 2022-04-28 2022-04-28 Expression control method, device, equipment and storage medium of bionic robot

Publications (1)

Publication Number Publication Date
CN114758399A true CN114758399A (en) 2022-07-15

Family

ID=82334066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210462864.XA Pending CN114758399A (en) 2022-04-28 2022-04-28 Expression control method, device, equipment and storage medium of bionic robot

Country Status (1)

Country Link
CN (1) CN114758399A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
CN115802101A (en) * 2022-11-25 2023-03-14 深圳创维-Rgb电子有限公司 Short video generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20210174072A1 (en) Microexpression-based image recognition method and apparatus, and related device
Mittal et al. A modified LSTM model for continuous sign language recognition using leap motion
Rao et al. Deep convolutional neural networks for sign language recognition
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
US20210271862A1 (en) Expression recognition method and related apparatus
KR101558202B1 (en) Apparatus and method for generating animation using avatar
Zheng et al. Recent advances of deep learning for sign language recognition
CN109685713B (en) Cosmetic simulation control method, device, computer equipment and storage medium
CN114758399A (en) Expression control method, device, equipment and storage medium of bionic robot
CN112232116A (en) Facial expression recognition method and device and storage medium
TW201937344A (en) Smart robot and man-machine interaction method
CN111768438B (en) Image processing method, device, equipment and computer readable storage medium
US10846568B2 (en) Deep learning-based automatic gesture recognition method and system
CN111028216A (en) Image scoring method and device, storage medium and electronic equipment
Paul et al. Rethinking generalization in american sign language prediction for edge devices with extremely low memory footprint
CN112116684A (en) Image processing method, device, equipment and computer readable storage medium
JP2019008571A (en) Object recognition device, object recognition method, program, and trained model
WO2021190433A1 (en) Method and device for updating object recognition model
Cai et al. Visual focus of attention estimation using eye center localization
Ali et al. Object recognition for dental instruments using SSD-MobileNet
CN111108508A (en) Facial emotion recognition method, intelligent device and computer-readable storage medium
CN110610131B (en) Face movement unit detection method and device, electronic equipment and storage medium
CN116994021A (en) Image detection method, device, computer readable medium and electronic equipment
KR20190115509A (en) Automatic Sign Language Recognition Method and System
CN117173677A (en) Gesture recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination