WO2019100932A1 - Motion control method and device, storage medium, and terminal - Google Patents

Motion control method and device, storage medium, and terminal

Info

Publication number
WO2019100932A1
WO2019100932A1 (PCT/CN2018/114008)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
frame image
data
target object
feature point
Prior art date
Application number
PCT/CN2018/114008
Other languages
English (en)
French (fr)
Inventor
陈欢智
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019100932A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a motion control method and device thereof, a storage medium, and a terminal.
  • in the related art, the virtual interactive application in the terminal device is implemented by acquiring human body motion through various sensors worn by a real person, converting that motion into motion control commands that control the motion of the virtual character in the device, and displaying the result on the terminal display screen.
  • the embodiment of the present application provides a motion control method, a device, a storage medium, and a terminal.
  • the embodiment of the present application provides a motion control method, which may include:
  • the embodiment of the present application further provides a motion control device, which may include:
  • a calibration data acquiring unit, configured to acquire feature calibration data of facial feature points based on feature positions of the facial feature points of a target object in an original frame image;
  • an update data acquiring unit, configured to acquire feature calibration update data of the facial feature points based on feature positions of the facial feature points in a current frame image when the feature positions in the current frame image change relative to the feature positions in the original frame image;
  • a motion control unit, configured to generate motion control information according to the feature calibration data and the feature calibration update data, and to control, by using the motion control information, a virtual object in a holographic projection to perform a motion.
  • the embodiment of the present application further provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor and performing the following steps:
  • the embodiment of the present application further provides a terminal device, which may include: a processor and a memory; wherein the memory stores a computer program, and the computer program is adapted to be loaded by the processor and perform the following steps:
  • in the embodiments of the present application, feature calibration data of the facial feature points is acquired based on the feature positions of the facial feature points of the target object in the original frame image; when the feature position of a facial feature point in the current frame image changes relative to its feature position in the original frame image, feature calibration update data is acquired based on the feature position in the current frame image; motion control information is then generated according to the feature calibration data and the feature calibration update data, and the virtual object in the holographic projection is controlled by the motion control information to perform the corresponding motion.
  • by analyzing the changes in the feature positions of the target object's facial feature points across different frame images, motion control information that drives the corresponding motion of the virtual object is obtained, so that the virtual object in the holographic projection imitates the action of the target object. This reduces the hardware cost of development, and the realism provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction.
  • FIG. 1 is a schematic flow chart of a motion control method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an operation control structure provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of obtaining a rotation direction according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a virtual object motion simulation effect provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another motion control method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flow chart of another motion control method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a motion control device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another motion control device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a motion control unit according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another motion control unit according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the motion control method provided by the embodiments of the present application can be applied to a scene in which a virtual character in a holographic projection imitates the motion of a real character.
  • for example, the motion control device acquires feature calibration data of the facial feature points based on the feature positions of the facial feature points of the target object in the original frame image; when the feature position of a facial feature point in the current frame image changes relative to its feature position in the original frame image, the device acquires feature calibration update data based on the feature position in the current frame image; it then generates motion control information according to the feature calibration data and the feature calibration update data, and uses the motion control information to control the virtual object in the holographic projection to perform the corresponding motion.
  • by analyzing the changes in the feature positions of the target object's facial feature points across different frame images, motion control information that drives the corresponding motion of the virtual object is obtained, so that the virtual object in the holographic projection imitates the action of the target object. This reduces the hardware cost of development, and the realism provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction.
  • the motion control device may be a portable smart box with a holographic projection function.
  • the holographic projection may be a technique of recording and reproducing a true three-dimensional image of an object using interference and diffraction principles.
  • FIG. 1 is a schematic flow chart of a motion control method according to an embodiment of the present application. As shown in FIG. 1 , the method in this embodiment of the present application may include the following steps S101 to S103.
  • the implementation structure of the motion control may be as shown in FIG. 2. A processing chip serves as the core of the processing module and is connected to the voice recognition module, the sound card, the face recognition module, and the graphics card. The inputs of the processing module include resources, face images, and voice data; after processing the face image and the voice data, the processing module can output sound to the speaker and project a holographic image onto the holographic film through the laser head.
  • the voice recognition module and the face recognition module can recognize the input face image and voice data by using the stored image recognition algorithm and voice recognition algorithm, respectively; the graphics card can process the display information obtained after image recognition and output it to the laser head for projection, and the sound card can process the sound information obtained after voice recognition and output it to the speaker.
  • the resources may be audio or picture resources or the like stored in the motion control device.
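  • the following is a minimal sketch (not part of the application) of how the processing module described above could be wired in code; all class and method names are illustrative assumptions.

```python
# Illustrative sketch of the processing-module wiring described above.
# All class and method names (detect, render_frame, recognize, synthesize)
# are assumptions for illustration only and are not named in the application.

class ProcessingModule:
    def __init__(self, face_recognizer, speech_recognizer, graphics_card, sound_card):
        self.face_recognizer = face_recognizer      # image recognition algorithm
        self.speech_recognizer = speech_recognizer  # voice recognition algorithm
        self.graphics_card = graphics_card          # prepares display info for the laser head
        self.sound_card = sound_card                # prepares audio for the speaker

    def handle_frame(self, face_image, resources):
        # Recognize facial feature points and build display information.
        feature_points = self.face_recognizer.detect(face_image)
        display_info = self.graphics_card.render_frame(resources, feature_points)
        return display_info  # sent to the laser head and projected onto the holographic film

    def handle_voice(self, voice_data):
        # Recognize speech and prepare audio output for the speaker.
        text = self.speech_recognizer.recognize(voice_data)
        return self.sound_card.synthesize(text)
```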
  • the motion control device may use a camera to acquire a facial image of a real target object, recognize the facial feature points of the facial image (e.g., the facial features such as the eyes, nose, and mouth) based on its built-in image recognition algorithm, and calibrate the facial feature points, that is, calibrate the position coordinates of the facial feature points in the facial image.
  • the target object may be a real object, such as a person or an animal.
  • the motion control device may acquire feature calibration data of the facial feature points based on the feature positions of the facial feature points of the target object in the original frame image.
  • the facial feature points may be the facial features (e.g., the eyes, eyebrows, nose, and mouth).
  • the original frame image may be the facial image of the target object initially acquired by the motion control device.
  • the feature position of a facial feature point in the original frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the original frame image.
  • the feature calibration data may be the coordinate data of the coordinate point at which the feature position is located. For example, with the lower left corner of the facial image as the coordinate origin, the feature calibration data of the left eye (i.e., the coordinates of the inner corner of the left eye in the original frame image) may be (3, 5).
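  • as an illustration of this step, the following sketch shows how feature calibration data might be collected for a few representative feature points; the detect_landmarks callable and the point names are assumptions for illustration, not APIs named in the application.

```python
# Sketch: calibrating facial feature points in the original frame image and
# keeping their coordinates as feature calibration data. detect_landmarks is a
# hypothetical landmark detector supplied by the caller (any off-the-shelf
# face-landmark model could fill this role).

REPRESENTATIVE_POINTS = ("left_inner_eye_corner", "right_inner_eye_corner", "nose_tip")

def acquire_feature_calibration_data(original_frame, detect_landmarks):
    # detect_landmarks(image) -> {feature_name: (x, y)}, with the lower left
    # corner of the facial image as the coordinate origin.
    landmarks = detect_landmarks(original_frame)
    # Keep only representative, relatively position-invariant points.
    return {name: landmarks[name] for name in REPRESENTATIVE_POINTS if name in landmarks}

# Example result: {"left_inner_eye_corner": (3, 5), "right_inner_eye_corner": (4, 5)}
```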
  • the camera of the motion control device can capture the facial image of the target object at any time and can use the currently captured facial image as the current frame image.
  • for example, the motion control device may use the camera to record the target object in real time and, during the recording, acquire one frame of the target object's facial image at every preset time interval, using that facial image as the current frame image of the current processing pass; alternatively, the motion control device may use the camera to capture a facial image once every preset time interval and use the image captured at that interval as the current frame image of the current processing pass.
  • the motion control device may perform image recognition on the current frame image, calibrate the facial feature points in the image, and acquire the feature positions of the calibrated feature points in the current frame image.
  • the feature position of a facial feature point in the current frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the current frame image.
  • the motion control device may detect, by matching the original frame image and the current frame image, whether the feature positions of the facial feature points of the target object in the current frame image match the feature positions of those facial feature points in the original frame image (for example, by overlapping the two frame images and determining whether facial feature points of the same type coincide), thereby determining whether the feature positions of the facial feature points in the current frame image have changed relative to the feature positions in the original frame image.
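  • a minimal sketch of this matching step is shown below; the coincidence tolerance is an assumption, since the application only requires checking whether feature points of the same type coincide when the two frames are overlapped.

```python
# Sketch: deciding whether feature positions changed between the two frames.
# The tolerance value is an assumption for illustration.

def feature_positions_changed(calibration_data, current_positions, tolerance=1.0):
    for name, (x0, y0) in calibration_data.items():
        x1, y1 = current_positions[name]
        if abs(x1 - x0) > tolerance or abs(y1 - y0) > tolerance:
            return True   # at least one same-type feature point no longer coincides
    return False
```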
  • movement of the target object's head or a change in its facial expression may cause the feature positions of the facial feature points to change, and the change may involve the feature positions of one or more facial feature points.
  • the motion control device may acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image.
  • the feature calibration update data may be the coordinate data of the facial feature points of the target object's facial image in the current frame image; for example, the feature calibration update data of the left eye (i.e., the coordinates of the inner corner of the left eye in the current frame image) may be (5, 5).
  • the motion control device may generate motion control information according to the feature calibration data and the feature calibration update data. It can be understood that the motion control information may be information used to control the virtual object in the holographic projection to perform a motion, and may include the amplitude or direction of the virtual object's action, for example, "turn 30° to the right", "smile", or "nod".
  • the motion control device may control the virtual object in the holographic projection to perform motion by using the motion control information.
  • the virtual object may be a virtual object in the device resources stored in the motion control device, or may be a virtual object corresponding to the target object that is generated by the motion control device; when the device resources are optimized, the virtual object can present a richer image, for example, when the device resource is a 3D resource, a 3D image can be presented in the motion control device.
  • the action of the virtual object in the holographic projection may be consistent with the action of the target object or may be in a mirror image relationship with the action of the target object.
  • the feature calibration data may include at least one piece of coordinate data, for example, the coordinate data of the inner corner of the left eye, the coordinate data of the inner corner of the right eye, or the coordinate data of the tip of the nose. When two pieces of coordinate data in the feature calibration data are selected, the coordinate length D1 between them can be acquired; similarly, the updated coordinate length D2 can be obtained from the two pieces of coordinate data selected in the feature calibration update data.
  • the two pieces of coordinate data selected in the feature calibration data and the two pieces selected in the feature calibration update data are coordinate data of the same type, for example, the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye.
  • the coordinate length D1 may be the eye spacing in the original frame image (which may be the distance from the inner corner of the left eye to the inner corner of the right eye), and the coordinate length D2 may be the eye spacing in the current frame image.
  • the motion control device can calculate the angle θ of the target object's face rotation by using D1 and D2, and can determine the direction of the face rotation according to the coordinate direction between the feature calibration data and the feature calibration update data. For example, if the coordinate data of the inner corner of the left eye in the feature calibration data is (3, 5) and the updated coordinate data of the inner corner of the left eye in the feature calibration update data is (5, 5), the rotation direction of the target object's head may be the direction indicated from the coordinate point (3, 5) to the coordinate point (5, 5), and the rotation direction shown in FIG. 3 is to the right.
  • the motion control device may generate motion control information containing the above rotation angle and direction (for example, "rotate the head to the right by the angle θ") and control the virtual object to perform the rotation motion shown in FIG. 4.
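  • a minimal sketch of generating such motion control information from the calibration data and the update data follows; the arccos relation used for the angle is an assumption (it models the projected eye spacing shrinking under a pure head yaw), since the application only states that the angle is calculated from D1 and D2.

```python
import math

# Sketch: deriving motion control information from feature calibration data and
# feature calibration update data. The arccos relation below is an assumption
# for illustration; the application does not specify the exact formula.

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def build_motion_control_info(calib, update):
    d1 = distance(calib["left_inner_eye_corner"], calib["right_inner_eye_corner"])
    d2 = distance(update["left_inner_eye_corner"], update["right_inner_eye_corner"])
    # Rotation direction from the horizontal displacement of one feature point,
    # e.g. (3, 5) -> (5, 5) indicates a rotation to the right.
    dx = update["left_inner_eye_corner"][0] - calib["left_inner_eye_corner"][0]
    direction = "right" if dx > 0 else "left" if dx < 0 else "none"
    # Hedged angle estimate: assume the projected eye spacing shrinks as cos(theta).
    ratio = min(d2 / d1, 1.0) if d1 else 1.0
    angle = math.degrees(math.acos(ratio))
    return {"action": "rotate_head", "direction": direction, "angle": angle}
```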
  • while performing the facial motion, the target object may simultaneously emit a voice; the motion control device may recognize the voice data emitted by the target object and control the virtual object to produce output according to the data content indicated by the voice data.
  • in the embodiments of the present application, feature calibration data of the facial feature points is acquired based on the feature positions of the facial feature points of the target object in the original frame image; when the feature position of a facial feature point in the current frame image changes relative to its feature position in the original frame image, feature calibration update data is acquired based on the feature position in the current frame image; motion control information is then generated according to the feature calibration data and the feature calibration update data, and the virtual object in the holographic projection is controlled by the motion control information to perform the corresponding motion.
  • by analyzing the changes in the feature positions of the target object's facial feature points across different frame images, motion control information that drives the corresponding motion of the virtual object is obtained, so that the virtual object in the holographic projection imitates the action of the target object. This reduces the hardware cost of development, and the realism provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction.
  • FIG. 5 is a schematic flowchart of another motion control method according to an embodiment of the present application. As shown in FIG. 5, the method in this embodiment of the present application may include the following steps S201 to S208.
  • S201: acquire an original frame image of the target object, calibrate the positions of the facial feature points of the target object in the original frame image, and obtain the feature positions of the facial feature points in the original frame image after calibration;
  • the implementation structure of the motion control may be as shown in FIG. 2. A processing chip serves as the core of the processing module and is connected to the voice recognition module, the sound card, the face recognition module, and the graphics card. The inputs of the processing module include resources, face images, and voice data; after processing the face image and the voice data, the processing module can output sound to the speaker and project a holographic image onto the holographic film through the laser head.
  • the voice recognition module and the face recognition module can recognize the input face image and voice data by using the stored image recognition algorithm and voice recognition algorithm, respectively; the graphics card can process the display information obtained after image recognition and output it to the laser head for projection, and the sound card can process the sound information obtained after voice recognition and output it to the speaker.
  • the resources may be audio or picture resources or the like stored in the motion control device.
  • the motion control device may use a camera to acquire an original frame image of the real target object, that is, a facial image of the target object, and may recognize the facial feature points of the facial image based on its built-in image recognition algorithm, so as to calibrate the positions of the facial feature points of the target object in the original frame image, that is, calibrate the position coordinates of the facial feature points in the facial image. Further, the motion control device may acquire the feature positions of the facial feature points in the original frame image after calibration.
  • the target object may be a real object, such as a person or an animal.
  • the facial feature points may be the facial features (e.g., the eyes, eyebrows, nose, and mouth).
  • the original frame image may be the facial image of the target object initially acquired by the motion control device.
  • the feature position of a facial feature point in the original frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the original frame image.
  • the motion control device may acquire feature calibration data of the facial feature point based on a feature location of the facial feature point of the target object in the original frame image.
  • the feature calibration data may be the coordinate data of the coordinate point at which the feature position is located. For example, with the lower left corner of the facial image as the coordinate origin, the feature calibration data of the left eye (i.e., the coordinates of the inner corner of the left eye in the original frame image) may be (3, 5).
  • the camera of the motion control device can capture the facial image of the target object at any time and can use the currently captured facial image as the current frame image.
  • for example, the motion control device may use the camera to record the target object in real time and, during the recording, acquire one frame of the target object's facial image at every preset time interval, using that facial image as the current frame image of the current processing pass; alternatively, the motion control device may use the camera to capture a facial image once every preset time interval and use the image captured at that interval as the current frame image of the current processing pass.
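  • a minimal sketch of sampling the current frame image at a preset time interval is shown below; the camera object and its read() method are assumptions for illustration.

```python
import time

# Sketch: taking one facial image every preset time interval and treating it as
# the current frame image of the current processing pass. The camera interface
# is a hypothetical placeholder, not an API named in the application.

def current_frame_loop(camera, process_current_frame, interval_seconds=0.1):
    while True:
        frame = camera.read()            # capture the target object's facial image
        process_current_frame(frame)     # calibrate feature points, compare with the original frame
        time.sleep(interval_seconds)     # preset time interval between frames
```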
  • the motion control device may perform image recognition on the current frame image, calibrate the facial feature points in the image, and acquire the feature positions of the calibrated feature points in the current frame image.
  • the feature position of a facial feature point in the current frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the current frame image.
  • the motion control device may detect, by matching the original frame image and the current frame image, whether the feature positions of the facial feature points of the target object in the current frame image match the feature positions of those facial feature points in the original frame image (for example, by overlapping the two frame images and determining whether facial feature points of the same type coincide), thereby determining whether the feature positions of the facial feature points in the current frame image have changed relative to the feature positions in the original frame image.
  • if the feature position of a facial feature point of the target object in the current frame image does not match its feature position in the original frame image (for example, after the two frame images are overlapped, facial feature points of the same type do not coincide), it can be determined that the feature position of that facial feature point has changed.
  • movement of the target object's head or a change in its facial expression may cause the feature positions of the facial feature points to change, and the change may involve the feature positions of one or more facial feature points.
  • the motion control device may acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image.
  • the feature calibration update data may be the coordinate data of the facial feature points of the target object's facial image in the current frame image; for example, the feature calibration update data of the left eye (i.e., the coordinates of the inner corner of the left eye in the current frame image) may be (5, 5).
  • the motion control device may determine, based on the feature calibration data and the feature calibration update data, motion control data indicating the motion information of the target object. It can be understood that the motion control data may be motion data generated when the target object moves, for example, the rotation angle value or rotation direction when the target object rotates its head.
  • the motion control data may also be intermediate process data obtained when the feature calibration data and the feature calibration update data are processed.
  • the feature calibration data may include at least one piece of coordinate data (for example, the coordinate data of the inner corner of the left eye, the coordinate data of the inner corner of the right eye, or the coordinate data of the tip of the nose). When two pieces of coordinate data in the feature calibration data are selected, the coordinate length D1 between them can be acquired; similarly, the updated coordinate length D2 can be obtained from the two pieces of coordinate data selected in the feature calibration update data, and D1 and D2 may serve as motion control data.
  • the two pieces of coordinate data selected in the feature calibration data and the two pieces selected in the feature calibration update data are coordinate data of the same type, for example, the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye.
  • the motion control device may control the virtual object in the holographic projection to perform the motion indicated by the motion control data.
  • the motion control information may be control information that contains the motion control data, for example, "shake the head left and right", "smile", or "nod".
  • the virtual object may be a virtual object in the device resources stored in the motion control device, or may be a virtual object corresponding to the target object that is generated by the motion control device.
  • when the device resources are optimized, the virtual object can present a richer image; for example, when the device resource is a 3D resource, a 3D image can be presented in the motion control device.
  • the action of the virtual object in the holographic projection may be consistent with the action of the target object or may be in a mirror image relationship with the action of the target object.
  • the motion control device may use an internal speech recognition algorithm to recognize the data content indicated by the voice data emitted by the target object; the voice data may be speech uttered by the target object while it performs the facial motion, for example, "I am very happy now" said while the target object smiles.
  • the motion control device may perform voice output according to the data content indicated by the voice data and control the virtual object accordingly; for example, the motion control device may control the virtual object to output "I am very happy now".
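  • a minimal sketch of this voice-driven output is shown below; the speech_recognizer and virtual_object interfaces are hypothetical placeholders rather than APIs named in the application.

```python
# Sketch: controlling the virtual object to output the recognized speech while it
# imitates the facial motion. The recognizer and virtual_object objects are
# assumptions for illustration only.

def handle_voice_while_moving(voice_data, speech_recognizer, virtual_object):
    text = speech_recognizer.recognize(voice_data)  # e.g. "I am very happy now"
    virtual_object.say(text)                        # virtual object outputs the same content
```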
  • controlling the virtual object to output the target object's voice data while controlling the virtual object to imitate the target object and complete the corresponding action increases the diversity of the interaction.
  • the feature calibration data may be an initial eye spacing, and the feature calibration update data may be an updated eye spacing after the target object's face rotates; in this case, generating, according to the feature calibration data and the feature calibration update data, the motion control information for controlling the motion of the virtual object in the holographic projection, and implementing the motion control of the virtual object according to the motion control information, may include the following steps, as shown in FIG. 6:
  • the initial eye spacing may be the eye spacing calculated from the feature calibration data in the original frame image (i.e., the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye); for example, if the coordinate of the inner corner of the left eye is (3, 5) and the coordinate of the inner corner of the right eye is (4, 5), the initial eye spacing D1 is 1.
  • the updated eye spacing may be the eye spacing calculated from the feature calibration update data in the current frame image (i.e., the updated coordinate data of the inner corner of the left eye and the updated coordinate data of the inner corner of the right eye); the updated eye spacing D2 may be 2.
  • the motion control device may acquire the angle information of the target object's face rotation based on the initial eye spacing and the updated eye spacing; it can be understood that the angle information includes a rotation direction and a rotation angle value.
  • for example, the rotation angle of the target object's face is denoted θ, and θ is 60°; if the feature calibration data gives the coordinate data of the inner corner of the left eye as (3, 5) and the feature calibration update data gives the updated coordinate data of the inner corner of the left eye as (4, 5), the rotation direction of the target object's head may be the direction indicated from the coordinate point (3, 5) to the coordinate point (4, 5), and the rotation direction shown in FIG. 3 is to the right.
  • S302: send a motion control instruction that carries the angle information, and control the virtual object in the holographic projection to rotate its face according to the direction and angle indicated by the angle information;
  • the motion control device may send a motion control instruction carrying the angle information to control the virtual object in the holographic projection to rotate its face according to the direction and angle indicated by the angle information; for example, the motion control device sends the motion control instruction "rotate the head 60° to the right" to control the virtual object in the holographic projection to rotate its head 60° to the right.
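  • a minimal sketch of step S302 is shown below; the instruction format and the rotate_face() method are illustrative assumptions.

```python
# Sketch of S302: packaging the angle information into a motion control
# instruction and applying it to the projected virtual object. The dictionary
# format and rotate_face() are assumptions, not APIs named in the application.

def send_rotation_instruction(virtual_object, direction, angle_degrees):
    instruction = {"command": "rotate_face", "direction": direction, "angle": angle_degrees}
    # e.g. {"command": "rotate_face", "direction": "right", "angle": 60}
    virtual_object.rotate_face(instruction["direction"], instruction["angle"])
    return instruction
```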
  • the motion control instruction for controlling the motion of the virtual object is generated according to the change in the eye spacing between the two frame images, which increases the accuracy with which the virtual object imitates the motion of the target object.
  • in the embodiments of the present application, feature calibration data of the facial feature points is acquired based on the feature positions of the facial feature points of the target object in the original frame image; when the feature position of a facial feature point in the current frame image changes relative to its feature position in the original frame image, feature calibration update data is acquired based on the feature position in the current frame image; motion control information is then generated according to the feature calibration data and the feature calibration update data, and the virtual object in the holographic projection is controlled by the motion control information to perform the corresponding motion.
  • by analyzing the changes in the feature positions of the target object's facial feature points across different frame images, motion control information that drives the corresponding motion of the virtual object is obtained, so that the virtual object in the holographic projection imitates the action of the target object; this reduces the hardware cost of development, and the realism provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction. Controlling the virtual object to output the target object's voice data while controlling it to imitate the target object's action increases the diversity of the interaction; generating the motion control instruction for controlling the motion of the virtual object according to the change in the eye spacing between the two frame images increases the accuracy with which the virtual object imitates the motion of the target object.
  • FIG. 7 to FIG. 10 are used to perform the method of the embodiments shown in FIG. 1 to FIG. 6 of the present application.
  • FIG. 7 is a schematic structural diagram of a motion control device according to an embodiment of the present application.
  • the motion control device 1 of the embodiment of the present application may include: a calibration data acquisition unit 11, an update data acquisition unit 12, and a motion control unit 13.
  • the calibration data acquiring unit 11 is configured to acquire feature calibration data of the facial feature point based on a feature position of the facial feature point of the target object in the original frame image;
  • the implementation structure of the motion control may be as shown in FIG. 2. A processing chip serves as the core of the processing module and is connected to the voice recognition module, the sound card, the face recognition module, and the graphics card. The inputs of the processing module include resources, face images, and voice data; after processing the face image and the voice data, the processing module can output sound to the speaker and project a holographic image onto the holographic film through the laser head.
  • the voice recognition module and the face recognition module can recognize the input face image and voice data by using the stored image recognition algorithm and voice recognition algorithm, respectively; the graphics card can process the display information obtained after image recognition and output it to the laser head for projection, and the sound card can process the sound information obtained after voice recognition and output it to the speaker.
  • the resources may be audio or picture resources or the like stored in the motion control device.
  • the motion control device 1 may use a camera to acquire a facial image of a real target object, recognize the facial feature points of the facial image (e.g., the facial features such as the eyes, nose, and mouth) based on its built-in image recognition algorithm, and calibrate the facial feature points, that is, calibrate the position coordinates of the facial feature points in the facial image.
  • the target object may be a real object, such as a person or an animal.
  • the calibration data acquiring unit 11 may acquire feature calibration data of the facial feature points based on the feature positions of the facial feature points of the target object in the original frame image.
  • the facial feature points may be the facial features (e.g., the eyes, eyebrows, nose, and mouth).
  • the original frame image may be the facial image of the target object initially acquired by the motion control device 1.
  • the feature position of a facial feature point in the original frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the original frame image.
  • the feature calibration data may be the coordinate data of the coordinate point at which the feature position is located. For example, with the lower left corner of the facial image as the coordinate origin, the feature calibration data of the left eye (i.e., the coordinates of the inner corner of the left eye in the original frame image) may be (3, 5).
  • an update data acquiring unit 12, configured to acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image when the feature positions of the facial feature points in the current frame image change relative to the feature positions in the original frame image;
  • the camera of the motion control device 1 can capture the facial image of the target object at any time and can use the currently captured facial image as the current frame image.
  • for example, the motion control device 1 may use the camera to record the target object in real time and, during the recording, acquire one frame of the target object's facial image at every preset time interval, using that facial image as the current frame image of the current processing pass; alternatively, the motion control device 1 may use the camera to capture a facial image once every preset time interval and use the image captured at that interval as the current frame image of the current processing pass.
  • the motion control device 1 may perform image recognition on the current frame image, calibrate the facial feature points in the image, and acquire the feature positions of the calibrated feature points in the current frame image.
  • the feature position of a facial feature point in the current frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the current frame image.
  • the motion control device 1 may detect, by matching the original frame image and the current frame image, whether the feature positions of the facial feature points of the target object in the current frame image match the feature positions of those facial feature points in the original frame image (for example, by overlapping the two frame images and determining whether facial feature points of the same type coincide), thereby determining whether the feature positions of the facial feature points in the current frame image have changed relative to the feature positions in the original frame image.
  • movement of the target object's head or a change in its facial expression may cause the feature positions of the facial feature points to change, and the change may involve the feature positions of one or more facial feature points.
  • when the feature position of a facial feature point in the current frame image changes relative to the feature position in the original frame image, the facial image of the target object may be considered to have rotated or to show an expression.
  • the update data acquiring unit 12 may acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image; the feature calibration update data may be the coordinate data of the facial feature points of the target object's facial image in the current frame image; for example, the feature calibration update data of the left eye (i.e., the coordinates of the inner corner of the left eye in the current frame image) may be (5, 5).
  • the motion control unit 13 is configured to generate motion control information according to the feature calibration data and the feature calibration update data, and control the virtual object in the holographic projection to perform motion by using the motion control information;
  • the motion control unit 13 may generate motion control information according to the feature calibration data and the feature calibration update data. It can be understood that the motion control information may be information for controlling the virtual object in the holographic projection to perform a motion, and may include the amplitude or direction of the virtual object's action, for example, "turn 30° to the right", "smile", or "nod". Further, the motion control unit 13 may control the virtual object in the holographic projection to perform the motion by using the motion control information. It can be understood that the virtual object may be a virtual object in the device resources stored in the motion control device 1, or may be a virtual object corresponding to the target object that is generated by the motion control device 1; when the device resources are optimized, the virtual object can present a richer image, for example, when the device resource is a 3D resource, the motion control device 1 can present a 3D image.
  • the action of the virtual object in the holographic projection may be consistent with the action of the target object or may be in a mirror image relationship with the action of the target object.
  • the feature calibration data may include at least one piece of coordinate data, for example, the coordinate data of the inner corner of the left eye, the coordinate data of the inner corner of the right eye, or the coordinate data of the tip of the nose. When two pieces of coordinate data in the feature calibration data are selected, the coordinate length D1 between them can be acquired; similarly, the updated coordinate length D2 can be obtained from the two pieces of coordinate data selected in the feature calibration update data.
  • the two pieces of coordinate data selected in the feature calibration data and the two pieces selected in the feature calibration update data are coordinate data of the same type, for example, the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye.
  • the coordinate length D1 may be the eye spacing in the original frame image (which may be the distance from the inner corner of the left eye to the inner corner of the right eye), and the coordinate length D2 may be the eye spacing in the current frame image.
  • the motion control unit 13 can calculate the angle θ of the target object's face rotation by using D1 and D2, and can determine the direction of the face rotation according to the coordinate direction between the feature calibration data and the feature calibration update data. For example, if the coordinate data of the inner corner of the left eye in the feature calibration data is (3, 5) and the updated coordinate data of the inner corner of the left eye in the feature calibration update data is (5, 5), the rotation direction of the target object's head may be the direction indicated from the coordinate point (3, 5) to the coordinate point (5, 5), and the rotation direction shown in FIG. 3 is to the right.
  • the motion control unit 13 may generate motion control information containing the above rotation angle and direction (for example, "rotate the head to the right by the angle θ") and control the virtual object to perform the rotation motion shown in FIG. 4.
  • while performing the facial motion, the target object may simultaneously emit a voice; the motion control device may recognize the voice data emitted by the target object and control the virtual object to produce output according to the data content indicated by the voice data.
  • in the embodiments of the present application, feature calibration data of the facial feature points is acquired based on the feature positions of the facial feature points of the target object in the original frame image; when the feature position of a facial feature point in the current frame image changes relative to its feature position in the original frame image, feature calibration update data is acquired based on the feature position in the current frame image; motion control information is then generated according to the feature calibration data and the feature calibration update data, and the virtual object in the holographic projection is controlled by the motion control information to perform the corresponding motion.
  • by analyzing the changes in the feature positions of the target object's facial feature points across different frame images, motion control information that drives the corresponding motion of the virtual object is obtained, so that the virtual object in the holographic projection imitates the action of the target object. This reduces the hardware cost of development, and the realism provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction.
  • FIG. 8 is a schematic structural diagram of another motion control device according to an embodiment of the present application.
  • the motion control device 1 of the embodiment of the present application may include: a calibration data acquiring unit 11, an update data acquiring unit 12, a motion control unit 13, an original position acquiring unit 14, a current position acquiring unit 15, a position change determining unit 16, and a voice control unit 17.
  • the original position acquiring unit 14 is configured to acquire an original frame image of the target object, calibrate the positions of the facial feature points of the target object in the original frame image, and obtain the feature positions of the facial feature points in the original frame image after calibration;
  • the implementation structure of the motion control may be as shown in FIG. 2. A processing chip serves as the core of the processing module and is connected to the voice recognition module, the sound card, the face recognition module, and the graphics card. The inputs of the processing module include resources, face images, and voice data; after processing the face image and the voice data, the processing module can output sound to the speaker and project a holographic image onto the holographic film through the laser head.
  • the voice recognition module and the face recognition module can recognize the input face image and voice data by using the stored image recognition algorithm and voice recognition algorithm, respectively; the graphics card can process the display information obtained after image recognition and output it to the laser head for projection, and the sound card can process the sound information obtained after voice recognition and output it to the speaker.
  • the resources may be audio or picture resources or the like stored in the motion control device.
  • the motion control device 1 may use a camera to acquire an original frame image of the real target object, that is, a facial image of the target object, and may recognize the facial feature points of the facial image based on its built-in image recognition algorithm; the original position acquiring unit 14 may then calibrate the positions of the facial feature points of the target object in the original frame image, that is, calibrate the position coordinates of the facial feature points in the facial image.
  • the original position acquiring unit 14 may acquire the feature positions of the facial feature points in the original frame image after calibration.
  • the target object may be a real object, such as a person or an animal.
  • the facial feature points may be the facial features (e.g., the eyes, eyebrows, nose, and mouth).
  • the original frame image may be the facial image of the target object initially acquired by the motion control device 1.
  • the feature position of a facial feature point in the original frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the original frame image.
  • the calibration data acquiring unit 11 is configured to acquire feature calibration data of the facial feature points based on feature positions of the facial feature points of the target object in the original frame image;
  • the calibration data acquiring unit 11 may acquire the feature calibration data of the facial feature points based on the feature positions of the facial feature points of the target object in the original frame image. It can be understood that the feature calibration data may be the coordinate data of the coordinate point at which the feature position is located; for example, with the lower left corner of the facial image as the coordinate origin, the feature calibration data of the left eye (i.e., the coordinates of the inner corner of the left eye in the original frame image) may be (3, 5).
  • a current position acquiring unit 15, configured to acquire the feature positions of the facial feature points of the target object in the current frame image;
  • the camera of the motion control device 1 can capture the facial image of the target object at any time and can use the currently captured facial image as the current frame image.
  • for example, the motion control device 1 may use the camera to record the target object in real time and, during the recording, acquire one frame of the target object's facial image at every preset time interval, using that facial image as the current frame image of the current processing pass; alternatively, the motion control device 1 may use the camera to capture a facial image once every preset time interval and use the image captured at that interval as the current frame image of the current processing pass.
  • the motion control device 1 may perform image recognition on the current frame image, and the current position acquiring unit 15 may calibrate the facial feature points in the image and acquire the feature positions of the calibrated feature points in the current frame image.
  • the feature position of a facial feature point in the current frame image may be a representative or position-invariant coordinate point selected from the facial feature points, for example, the coordinate point of the inner corner of the eye, the outer corner of the eye, or the tip of the nose in the current frame image.
  • a position change determining unit 16, configured to determine that the feature position of a facial feature point has changed when the feature position of the facial feature point of the target object in the current frame image does not match the feature position of the facial feature point of the target object in the original frame image;
  • the motion control device 1 may detect, by matching the original frame image and the current frame image, whether the feature positions of the facial feature points of the target object in the current frame image match the feature positions of those facial feature points in the original frame image (for example, by overlapping the two frame images and determining whether facial feature points of the same type coincide), thereby determining whether the feature positions of the facial feature points in the current frame image have changed relative to the feature positions in the original frame image.
  • if the feature positions do not match, the position change determining unit 16 may determine that the feature positions of the facial feature points have changed.
  • movement of the target object's head or a change in its facial expression may cause the feature positions of the facial feature points to change, and the change may involve the feature positions of one or more facial feature points.
  • an update data acquiring unit 12, configured to acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image when the feature positions of the facial feature points in the current frame image change relative to the feature positions in the original frame image;
  • when the feature position of a facial feature point in the current frame image changes relative to the feature position in the original frame image, the facial image of the target object may be considered to have rotated or to show an expression.
  • the update data acquiring unit 12 may acquire feature calibration update data of the facial feature points based on the feature positions of the facial feature points in the current frame image; the feature calibration update data may be the coordinate data of the facial feature points of the target object's facial image in the current frame image; for example, the feature calibration update data of the left eye (i.e., the coordinates of the inner corner of the left eye in the current frame image) may be (5, 5).
  • the motion control unit 13 is configured to generate, according to the feature calibration data and the feature calibration update data, motion control information for controlling the motion of the virtual object in the holographic projection, and to implement motion control of the virtual object according to the motion control information;
  • the motion control unit 13 may generate, according to the feature calibration data and the feature calibration update data, motion control information for controlling the motion of the virtual object in the holographic projection, and may implement motion control of the virtual object according to the motion control information.
  • FIG. 9 is a schematic structural diagram of a motion control unit according to an embodiment of the present application.
  • the motion control unit 13 may include:
  • a data determining sub-unit 131, configured to determine, according to the feature calibration data and the feature calibration update data, motion control data indicating the motion information of the target object;
  • the data determining sub-unit 131 may determine, based on the feature calibration data and the feature calibration update data, motion control data indicating the motion information of the target object. It can be understood that the motion control data may be motion data generated when the target object moves, for example, the rotation angle value or rotation direction when the target object rotates its head.
  • the motion control data may also be intermediate process data obtained when the feature calibration data and the feature calibration update data are processed.
  • the feature calibration data may include at least one piece of coordinate data (for example, the coordinate data of the inner corner of the left eye, the coordinate data of the inner corner of the right eye, or the coordinate data of the tip of the nose). When two pieces of coordinate data in the feature calibration data are selected, the coordinate length D1 between them can be acquired; similarly, the updated coordinate length D2 can be obtained from the two pieces of coordinate data selected in the feature calibration update data, and D1 and D2 may serve as motion control data.
  • the two pieces of coordinate data selected in the feature calibration data and the two pieces selected in the feature calibration update data are coordinate data of the same type, for example, the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye.
  • the motion control sub-unit 132 is configured to control the virtual object in the holographic projection to perform motion by using the motion indicated by the motion control data;
  • the motion control sub-unit 132 may control the virtual object in the holographic projection to perform motion by using the motion indicated by the motion control data.
  • It can be understood that the action control information may be control information that includes the action control data, for example, "turn the head 30° to the right", "smile", "nod", and the like.
  • the virtual object may be a virtual object in the device resource stored in the motion control device, or may be a virtual object corresponding to the target object generated by the motion control device.
  • When the device resources are optimized, the virtual object presents a richer image; for example, when the device resources are 3D resources, a 3D image can be presented in the motion control device.
  • the action of the virtual object in the holographic projection may be consistent with the action of the target object or may be in a mirror image relationship with the action of the target object.
  • The voice control unit 17 is configured to recognize the voice data uttered by the target object, and to control the virtual object to perform voice output according to the data content indicated by the voice data;
  • In some embodiments, the voice control unit 17 may use an internal voice recognition algorithm to identify the data content of the voice data uttered by the target object. The voice data may be speech uttered by the target object while performing a facial action, for example, "I am very happy now" spoken while the target object smiles.
  • In some embodiments, the voice control unit 17 may control the virtual object to perform voice output according to the data content indicated by the voice data; for example, the voice control unit 17 may control the virtual object to output "I am very happy now".
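  • A possible shape of this voice path is sketched below; `recognize_speech`, `synthesize_speech` and `virtual_object` are hypothetical stand-ins for the voice recognition algorithm, the sound-card output and the projected avatar, and are not names defined by this application:

```python
def relay_target_speech(audio_frame, recognize_speech, synthesize_speech, virtual_object):
    """Recognize what the target object said and have the virtual object say it back."""
    text = recognize_speech(audio_frame)              # e.g. "I am very happy now"
    if text:
        virtual_object.play(synthesize_speech(text))  # avatar outputs the recognized content
    return text
```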
  • In this embodiment, while the virtual object is controlled to imitate the target object and complete the corresponding action, the virtual object is also controlled to output the voice data of the target object, which increases the diversity of the interaction.
  • In a specific implementation of this embodiment, the feature calibration data is an initial eye spacing, and the feature calibration update data is an updated eye spacing after the face of the target object is rotated. As shown in FIG. 10, the motion control unit may include:
  • an angle information acquiring sub-unit 133, configured to acquire angle information of the face rotation of the target object based on the initial eye spacing and the updated eye spacing, where the angle information includes a rotation direction and a rotation angle value;
  • In some embodiments, the initial eye spacing may be the eye spacing calculated from the feature calibration data in the original frame image (i.e., the coordinate data of the inner corner of the left eye and the coordinate data of the inner corner of the right eye); for example, if the coordinate of the inner corner of the left eye is (3, 5) and the coordinate of the inner corner of the right eye is (4, 5), the initial eye spacing D1 is 1. Similarly, the updated eye spacing may be the eye spacing calculated from the feature calibration update data in the current frame image (i.e., the coordinate update data of the inner corners of the left and right eyes); the updated eye spacing D2 may be 2.
  • In some embodiments, the angle information acquiring sub-unit 133 may acquire the angle information of the face rotation of the target object based on the initial eye spacing and the updated eye spacing; it can be understood that the angle information includes a rotation direction and a rotation angle value. For example, let the face rotation angle of the target object be θ; for the example spacings above, θ is calculated to be 60°. If the coordinate data of the inner corner of the left eye in the feature calibration data is (3, 5) and the updated coordinate data of the inner corner of the left eye in the feature calibration update data is (4, 5), the head rotation direction of the target object may be the direction indicated from the coordinate point (3, 5) to the coordinate point (4, 5); the rotation direction shown in FIG. 3 is to the right.
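  • The application does not state the exact formula used to compute θ, so the sketch below is only one plausible choice: it assumes the projected eye spacing scales roughly with the cosine of the face rotation angle (which reproduces the 60° value for the example spacings D1 = 1 and D2 = 2), and it reads the rotation direction from the displacement of the inner corner of the left eye:

```python
import math

def face_rotation_angle(d1, d2):
    """Assumed estimate: theta = arccos of the eye-spacing ratio, kept within [0, 1]."""
    ratio = min(d1, d2) / max(d1, d2)
    return math.degrees(math.acos(ratio))

def face_rotation_direction(old_point, new_point):
    """Direction indicated from the old to the new position of a reference feature point."""
    if new_point[0] > old_point[0]:
        return "right"
    if new_point[0] < old_point[0]:
        return "left"
    return "none"

angle = face_rotation_angle(1.0, 2.0)                # ~60 degrees for the example spacings
direction = face_rotation_direction((3, 5), (4, 5))  # "right" for the example coordinates
```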
  • a rotation control sub-unit 134, configured to send an action control instruction carrying the angle information, and to control the virtual object in the holographic projection to rotate its face according to the direction and angle indicated by the angle information;
  • In some embodiments, the rotation control sub-unit 134 may send an action control instruction carrying the angle information to control the virtual object in the holographic projection to rotate its face according to the direction and angle indicated by the angle information; for example, the rotation control sub-unit 134 sends an action control instruction of "turn the head 60° to the right" to control the virtual object in the holographic projection to turn its head 60° to the right.
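  • The encoding of the action control instruction is not specified in the application; a minimal assumed message format could simply carry the action type together with the angle information:

```python
def build_rotation_instruction(direction, angle_degrees):
    """Assumed instruction format carrying the rotation direction and angle value."""
    return {"action": "rotate_face", "direction": direction, "angle": angle_degrees}

instruction = build_rotation_instruction("right", 60)
# The instruction would then be handed to the projection side, e.g.
# send_to_hologram(instruction)   # hypothetical transport function
```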
  • In this embodiment, the action control instruction for controlling the motion of the virtual object is generated according to the change of the eye spacing between the two frame images, which increases the accuracy with which the virtual object imitates the motion of the target object.
  • In this embodiment of the present application, the feature calibration data of the facial feature point is acquired based on the feature position of the facial feature point of the target object in the original frame image; when the feature position of the facial feature point in the current frame image changes relative to the feature position in the original frame image, the feature calibration update data of the facial feature point is acquired based on the feature position of the facial feature point in the current frame image; action control information is then generated according to the feature calibration data and the feature calibration update data, and the action control information is used to control the virtual object in the holographic projection to perform motion. By recognizing the facial image of the real target object and analyzing the changes of the feature positions of its facial feature points across different frame images, the action control information for controlling the corresponding motion of the virtual object is obtained, and the virtual object in the holographic projection is controlled to imitate the action of the target object. This reduces the hardware cost of development, and the sense of reality provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction. While the virtual object is controlled to imitate the target object and complete the corresponding action, the virtual object is also controlled to output the voice data of the target object, which increases the diversity of the interaction; and the action control instruction for controlling the motion of the virtual object is generated according to the change of the eye spacing between the two frame images, which increases the accuracy with which the virtual object imitates the motion of the target object.
  • The embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the method steps of the embodiments shown in FIG. 1 to FIG. 6 above; for the specific execution process, reference may be made to the descriptions of the embodiments shown in FIG. 1 to FIG. 6, which are not repeated here.
  • FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the terminal 1000 may include at least one processor 1001, such as a CPU, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
  • the communication bus 1002 is used to implement connection communication between these components.
  • the user interface 1003 can include a display and a keyboard.
  • Optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • the memory 1005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
  • the memory 1005 can also optionally be at least one storage device located remotely from the aforementioned processor 1001. As shown in FIG. 11, an operating system, a network communication module, a user interface module, and a motion control application may be included in the memory 1005 as a computer storage medium.
  • In the terminal 1000 shown in FIG. 11, the user interface 1003 is mainly configured to provide an input interface for the user and to acquire data input by the user; the network interface 1004 is used for data communication with the user terminal; and the processor 1001 may be configured to invoke the motion control application stored in the memory 1005 and specifically perform the following operations:
  • acquiring feature calibration data of a facial feature point of a target object based on a feature position of the facial feature point in an original frame image;
  • when the feature position of the facial feature point in a current frame image changes relative to the feature position in the original frame image, acquiring feature calibration update data of the facial feature point based on the feature position of the facial feature point in the current frame image;
  • generating action control information according to the feature calibration data and the feature calibration update data, and controlling, by using the action control information, a virtual object in a holographic projection to perform motion.
  • In an embodiment, before acquiring the feature calibration data of the facial feature point based on the feature position of the facial feature point of the target object in the original frame image, the processor 1001 is further configured to perform the following operation: collecting the original frame image of the target object, calibrating the position of the facial feature point of the target object in the original frame image, and acquiring the calibrated feature position of the facial feature point in the original frame image.
  • In an embodiment, before acquiring the feature calibration update data of the facial feature point based on the feature position of the facial feature point in the current frame image when that feature position changes relative to the feature position in the original frame image, the processor 1001 is further configured to perform the following operations: acquiring the feature position of the facial feature point of the target object in the current frame image; and, when the feature position of the facial feature point of the target object in the current frame image does not match the feature position of the facial feature point of the target object in the original frame image, determining that the feature position of the facial feature point has changed.
  • In an embodiment, when generating the action control information according to the feature calibration data and the feature calibration update data and using the action control information to control the virtual object in the holographic projection to perform motion, the processor 1001 specifically performs the following operations: determining, based on the feature calibration data and the feature calibration update data, action control data indicating the action information of the target object; and controlling the virtual object in the holographic projection to perform the motion indicated by the action control data.
  • In an embodiment, the feature calibration data is an initial eye spacing, and the feature calibration update data is an updated eye spacing after the face of the target object is rotated. When generating the action control information according to the feature calibration data and the feature calibration update data and using the action control information to control the virtual object in the holographic projection to perform motion, the processor 1001 specifically performs the following operations:
  • acquiring angle information of the face rotation of the target object based on the initial eye spacing and the updated eye spacing, the angle information including a rotation direction and a rotation angle value;
  • sending an action control instruction carrying the angle information, and controlling the virtual object in the holographic projection to rotate its face according to the direction and angle indicated by the angle information.
  • In an embodiment, the processor 1001 is further configured to perform the following operation: recognizing the voice data uttered by the target object, and controlling the virtual object to perform voice output according to the data content indicated by the voice data.
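  • Under the same assumptions as the earlier sketches (a hypothetical `detect_landmarks` helper, a hypothetical `send_to_hologram` transport, and the assumed eye-spacing angle estimate), the operations listed above can be strung together for one current frame roughly as follows:

```python
import math

def process_current_frame(original_frame, current_frame, detect_landmarks, send_to_hologram):
    base = detect_landmarks(original_frame)   # feature calibration data (original frame)
    cur = detect_landmarks(current_frame)     # feature positions in the current frame
    if cur == base:
        return None                           # positions match: no update needed

    def spacing(points):
        left, right = points["left_inner_eye_corner"], points["right_inner_eye_corner"]
        return math.hypot(right[0] - left[0], right[1] - left[1])

    d1, d2 = spacing(base), spacing(cur)                        # initial / updated eye spacing
    angle = math.degrees(math.acos(min(d1, d2) / max(d1, d2)))  # assumed angle estimate
    direction = ("right" if cur["left_inner_eye_corner"][0] > base["left_inner_eye_corner"][0]
                 else "left")

    instruction = {"action": "rotate_face", "direction": direction, "angle": angle}
    send_to_hologram(instruction)             # drives the virtual object in the holographic projection
    return instruction
```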
  • In this embodiment of the present application, the feature calibration data of the facial feature point is acquired based on the feature position of the facial feature point of the target object in the original frame image; when the feature position of the facial feature point in the current frame image changes relative to the feature position in the original frame image, the feature calibration update data of the facial feature point is acquired based on the feature position of the facial feature point in the current frame image; action control information is then generated according to the feature calibration data and the feature calibration update data, and the action control information is used to control the virtual object in the holographic projection to perform motion. By recognizing the facial image of the real target object and analyzing the changes of the feature positions of its facial feature points across different frame images, the action control information for controlling the corresponding motion of the virtual object is obtained, and the virtual object in the holographic projection is controlled to imitate the action of the target object. This reduces the hardware cost of development, and the sense of reality provided by the holographic projection improves the fidelity of the display effect and increases the authenticity of the interaction. While the virtual object is controlled to imitate the target object and complete the corresponding action, the virtual object is also controlled to output the voice data of the target object, which increases the diversity of the interaction; and the action control instruction for controlling the motion of the virtual object is generated according to the change of the eye spacing between the two frame images, which increases the accuracy with which the virtual object imitates the motion of the target object.
  • A person of ordinary skill in the art may understand that all or part of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
  • The foregoing disclosure is merely a description of preferred embodiments of the present application and certainly is not intended to limit the scope of the claims of the present application; therefore, equivalent variations made according to the claims of the present application still fall within the scope covered by the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

本申请实施例公开一种运动控制方法及其设备、存储介质、终端,其中方法包括如下步骤:基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据;根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。

Description

一种运动控制方法及其设备、存储介质、终端
本申请要求于2017年11月23日提交中国专利局、申请号为201711185797.7、申请名称为“一种运动控制方法及其设备、存储介质、终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种运动控制方法及其设备、存储介质、终端。
背景
随着计算机技术的快速发展,基于智能手机、智能电视以及平板电脑等终端设备的虚拟互动类应用的开发已经成为了当下的热门技术,例如,体感互动类游戏等。现有技术中,终端设备中虚拟互动类应用的实现是基于现实人物佩戴的各类传感器获取人体的动作,然后将人体动作转换为动作控制指令控制设备中虚拟角色的运动,通过终端显示屏显示互动的展示效果。
技术内容
本申请实施例提供一种运动控制方法及其设备、存储介质、终端,通过识别现实中目标对象的面部图像,分析面部特征点的变化,控制全息投影中虚拟对象对目标对象的动作模仿,可以降低开发的硬件成本,基于全息投影提供的真实感可以提高显示效果的逼真度,增加互动的真实性。
本申请实施例提供了一种运动控制方法,可包括:
基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部 特征点的特征标定数据;
当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
本申请实施例还提供了一种运动控制设备,可包括:
标定数据获取单元,用于基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
更新数据获取单元,用于当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
运动控制单元,用于根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
本申请实施例还提供了一种计算机存储介质,所述计算机存储介质存储有多条指令,所述指令适于由处理器加载并执行以下步骤:
基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
本申请实施例还提供了一种终端设备,可包括:处理器和存储器;其中,所述存储器存储有计算机程序,所述计算机程序适于由所述处理器加载并执行以下步骤:
基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性。
附图简要说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技 术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种运动控制方法的流程示意图;
图2是本申请实施例提供的一种运控控制架构示意图;
图3是本申请实施例提供的一种转动方向获取示意图;
图4是本申请实施例提供的一种虚拟对象动作模仿效果示意图;
图5是本申请实施例提供的另一种运动控制方法的流程示意图;
图6是本申请实施例提供的另一种运动控制方法的流程示意图;
图7是本申请实施例提供的一种运动控制设备的结构示意图;
图8是本申请实施例提供的另一种运动控制设备的结构示意图;
图9是本申请实施例提供的一种运动控制单元的结构示意图;
图10是本申请实施例提供另一种运动控制单元的结构示意图;
图11是本申请实施例提供的一种终端的结构示意图。
实施本申请的方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供的运动控制方法可以应用于全息投影中虚拟人物模仿现实人物动作的场景中,例如:运动控制设备基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征 位置获取所述面部特征点的特征标定更新数据,然后根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性。
本申请实施例涉及的运动控制设备可以是具备全息投影功能的便携性的智能盒子,所述全息投影可以是利用干涉和衍射原理记录并再现物体真实的三维图像的技术。
图1为本申请实施例的一种运动控制方法的流程示意图。如图1所示,本申请实施例的所述方法可以包括以下步骤S101-步骤S103。
S101,基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
在本申请实施例中,运动控制的实现架构可以如图2所示,处理芯片作为处理模块的核心部分,分别与语音识别模块、声卡、人脸识别模块和显卡相连,所述处理模块的输入包括资源、人脸图像和语音数据,所述处理模块对人脸图像和语音数据处理后可以输出至扬声器以及通过激光头投射至全息膜进行全息投影。可以理解的是,所述语音识别模块和所述人脸识别模块分别可以通过所存储的语音识别算法和图像识别算法对输入的人脸图像和语音数据进行识别,所述显卡可以对图像识别后得到的显示信息进行处理,输出至激光头进行投射,所述声卡可以对语音识别后得到的声音信息进行处理,输出至扬声器。所述资源可以是所述运动控制设备中存储的音频或图片资源等。
一些实施例中,运动控制设备可以采用摄像头获取现实中目标对象 的面部图像,再基于自身提供的图像识别算法识别出面部图像的面部特征点(例如,面部五官),并对面部特征点进行标定,即标定面部特征点在面部图像中的位置坐标。其中,所述目标对象可以是真实客体,例如,人物或动物等。
一些实施例中,所述运动控制设备可以基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据,可以理解的是,所述面部特征点可以是面部五官,所述原帧图像可以是所述运动控制设备初始获取到的所述目标对象的面部图像。所述面部特征点在原帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述原帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述原帧图像中的坐标点。所述特征标定数据可以是所述特征位置所处的坐标点的坐标数据,例如,以面部图像左下角为坐标原点,左眼的特征标定数据(即左眼内眼角在原帧图像中的坐标)可以是(3,5)。
S102,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
一些实施例中,所述运动控制设备的摄像头可以随时捕捉所述目标对象的面部图像,并将当前捕捉到的面部图像可以作为当前帧图像,可选的,所述运动控制设备可以采用摄像头对目标对象进行实时录像,并在录像过程中每隔预设的时间间隔获取一帧所述目标对象的面部图像,将该面部图像作为当前处理过程的当前帧图像;或者所述运动控制设备可以采用摄像头每隔预设的时间间隔获取一次所述面部图像,并将间隔获取到的面部图像作为当前处理过程的当前帧图像。进一步的,所述运动控制设备可以对所述当前帧图像进行图像识别,对图像中的面部特征 点进行标定,并获取标定后面部特征点在当前帧图像中的特征位置。其中,所述面部特征点在当前帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述当前帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述当前帧图像中的坐标点。
一些实施例中,所述运动控制设备可以通过将所述原帧图像和所述当前帧图像进行匹配,检测所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置是否匹配(例如,将两帧图像重合,判断同类型的面部特征点是否重合),从而判断所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置是否发生变化。
一些实施例中,所述目标对象的头部发生摇动或者面部表情发生变化等动作,都可以引起所述面部特征点的特征位置发生变化,其中,所述特征位置发生变化可以是一个或多个面部特征点的特征位置发生变化。
一些实施例中,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,可以认为所述目标对象的面部图像发生了转动或者表情出现了变化,所述运动控制设备可以基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据,所述特征标定更新数据可以是所述目标对象的面部图像发生变化后,所述面部特征点在所述当前帧图像中的坐标数据,例如,左眼的特征标定更新数据(即左眼内眼角在当前帧图像中的坐标)可以是(5,5)。
S103,根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动;
一些实施例中,所述运动控制设备可以根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,可以理解的是,所述动作控制信息可以是控制全息投影中虚拟对象进行运动的信息,可以包括所述虚拟对象的动作幅度或方向等,例如,“向右30o转头”、“微笑”、“点头”等。进一步的,所述运动控制设备可以采用所述动作控制信息控制全息投影中虚拟对象进行运动。可以理解的是,所述虚拟对象可以是所述运动控制设备中存储的设备资源中的虚拟客体,也可以是所述运动控制设备生成的所述目标对象对应的虚拟客体,在设备资源优化的情况下,虚拟客体会呈现更加丰富的影像,例如,设备资源为3D资源时,运动控制设备中可以呈现3D的影像。
一些实施例中,所述全息投影中的虚拟对象的动作可以与所述目标对象的动作一致,也可以与所述目标对象的动作呈镜像关系。
一些实施例中,所述特征标定数据可以包括至少一个坐标数据,例如,可以是左眼内眼角的坐标数据、右眼内眼角的坐标数据或者鼻尖的坐标数据等,当选取所述特征标定数据中的两个坐标数据时,可以获取所述两个坐标数据间的坐标长度D1,同样的,也可以根据在所述特征标定更新数据中选取的两个坐标数据,获取更新后的坐标长度D2,其中,在所述特征标定数据中选取的两个坐标数据和在所述特征标定更新数据中选取的两个坐标数据是相同类型的坐标数据,例如,都是左眼内眼角的坐标数据和右眼内眼角的坐标数据。
一些实施例中,上述坐标长度D1可以是原帧图像中的眼间距(可以是左眼内眼角到右眼内眼见间的距离),上述坐标长度D2可以是当前帧图像中的眼间距,所述运动控制设备可以利用D1和D2计算出目标对象面部转动的角度θ,例如,,可以根据特征标定数据到特征标定更新数据间的坐标方向确定目标对象面部转动的方向,例如,特征标定数 据左眼内眼角坐标数据为(3,5),特征标定更新数据左眼内眼角更新坐标数据为(5,5),则目标对象头部转动方向可以是坐标点(3,5)到坐标点(5,5)作指示的方向,如图3所示的转动方向为向右转动。进一步的,所述运动控制设备可生成包含上述转动角度和方向的动作控制信息(例如,“以θ角度向右转动头部”),控制虚拟对象实现如图4所示的转头动作。
一些实施例中,所述目标对象动作的同时可以输出语音,所述运动控制设备可以识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性。
请参见图5,为本申请实施例提供了另一种运动控制方法的流程示意图。如图5所示,本申请实施例的所述方法可以包括以下步骤S201-步骤S208。
S201,采集目标对象的原帧图像,对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置;
在本申请实施例中,运动控制的实现架构可以如图2所示,处理芯片作为处理模块的核心部分,分别与语音识别模块、声卡、人脸识别模块和显卡相连,所述处理模块的输入包括资源、人脸图像和语音数据,所述处理模块对人脸图像和语音数据处理后可以输出至扬声器以及通过激光头投射至全息膜进行全息投影。可以理解的是,所述语音识别模块和所述人脸识别模块分别可以通过所存储的语音识别算法和图像识别算法对输入的人脸图像和语音数据进行识别,所述显卡可以对图像识别后得到的显示信息进行处理,输出至激光头进行投射,所述声卡可以对语音识别后得到的声音信息进行处理,输出至扬声器。所述资源可以是所述运动控制设备中存储的音频或图片资源等。
一些实施例中,运动控制设备可以采用摄像头获取现实中目标对象的原帧图像即所述目标对象的面部图像,并可以基于自身提供的图像识别算法识别出面部图像的面部特征点,从而可以对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,即标定面部特征点在面部图像中的位置坐标。进一步的,所述运动控制设备可以获取标定后所述面部特征点在所述原帧图像中的特征位置。
一些实施例中,所述目标对象可以是真实客体,例如,人物或动物等,所述面部特征点可以是面部五官,所述原帧图像可以是所述运动控制设备初始获取到的所述目标对象的面部图像。所述面部特征点在原帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述原帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述原帧图像中的坐标点。
S202,基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
一些实施例中,所述运动控制设备可以基于目标对象的面部特征点 在原帧图像中的特征位置获取所述面部特征点的特征标定数据,可以理解的是,所述特征标定数据可以是所述特征位置所处的坐标点的坐标数据,例如,以面部图像左下角为坐标原点,左眼的特征标定数据(即左眼内眼角在原帧图像中的坐标)可以是(3,5)。
S203,获取所述目标对象的面部特征点在当前帧图像中的特征位置;
一些实施例中,所述运动控制设备的摄像头可以随时捕捉所述目标对象的面部图像,并将当前捕捉到的面部图像可以作为当前帧图像,可选的,所述运动控制设备可以采用摄像头对目标对象进行实时录像,并在录像过程中每隔预设的时间间隔获取一帧所述目标对象的面部图像,将该面部图像作为当前处理过程的当前帧图像;或者所述运动控制设备可以采用摄像头每隔预设的时间间隔获取一次所述面部图像,并将间隔获取到的面部图像作为当前处理过程的当前帧图像。进一步的,所述运动控制设备可以对所述当前帧图像进行图像识别,对图像中的面部特征点进行标定,并获取标定后面部特征点在当前帧图像中的特征位置。其中,所述面部特征点在当前帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述当前帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述当前帧图像中的坐标点。
S204,当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化;
一些实施例中,所述运动控制设备可以通过将所述原帧图像和所述当前帧图像进行匹配,检测所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置是否匹配(例如,将两帧图像重合,判断同类型的面部特征点是否重合), 从而判断所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置是否发生变化。当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时(例如,两帧图像重合后,存在同类型的面部特征点不重合),可以确定所述面部特征点的特征位置发生了变化。
一些实施例中,所述目标对象的头部发生摇动或者面部表情发生变化等动作,都可以引起所述面部特征点的特征位置发生变化,可以包括一个或多个面部特征点的特征位置发生变化。
S205,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
一些实施例中,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,可以认为所述目标对象的面部图像发生了转动或者表情出现了变化,所述运动控制设备可以基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据,所述特征标定更新数据可以是所述目标对象的面部图像发生变化后,所述面部特征点在所述当前帧图像中的坐标数据,例如,左眼的特征标定更新数据(即左眼内眼角在当前帧图像中的坐标)可以是(5,5)。
S206,基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
一些实施例中,所述运动控制设备可以基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据。可以理解的是,所述动作控制数据可以是所述目标对象运动时产生的运动数据,例如,目标对象转动头部时的转动角度值或转动方向。
一些实施例中,所述动作控制数据也可以是对所述特征标定数据和所述特征标定更新数据进行运算处理时的中间过程数据,例如,所述特征标定数据可以包括至少一个坐标数据(例如,可以是左眼内眼角的坐标数据、右眼内眼角的坐标数据或者鼻尖的坐标数据等)当选取所述特征标定数据中的两个坐标数据时,可以获取所述两个坐标数据间的坐标长度D1,同样的,也可以根据在所述特征标定更新数据中选取的两个坐标数据,获取更新后的坐标长度D2,则D1和D2可以是动作控制数据。需要说明的是,在所述特征标定数据中选取的两个坐标数据和在所述特征标定更新数据中选取的两个坐标数据是相同类型的坐标数据,例如,都是左眼内眼角的坐标数据和右眼内眼角的坐标数据。
S207,采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动;
一些实施例中,所述运动控制设备可以采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动,可以理解的是,所述动作控制信息可以是包含所述动作控制数据的控制信息,例如,“左右摇头”、“微笑”、“点头”等。所述虚拟对象可以是所述运动控制设备中存储的设备资源中的虚拟客体,也可以是所述运动控制设备生成的所述目标对象对应的虚拟客体,在设备资源优化的情况下,虚拟客体会呈现更加丰富的影像,例如,设备资源为3D资源时,运动控制设备中可以呈现3D的影像。
一些实施例中,所述全息投影中的虚拟对象的动作可以与所述目标对象的动作一致,也可以与所述目标对象的动作呈镜像关系。
S208,识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出;
一些实施例中,所述运动控制设备可以采用内部的语音识别算法识 别所述目标对象所发出的语音数据所指代的数据内容,所述语音数据可以是所述目标对象在进行面部动作的同时发出的语音,例如,所述目标对象在微笑时所说的“我现在很开心”。
一些实施例中,所述运动控制设备可以根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出,例如,所述运动控制设备可以控制所述虚拟对象输出“我现在很开心”。
在本申请实施例中,在控制所述虚拟对象模仿所述目标对象完成相应动作的同时,控制所述虚拟对象输出所述目标对象的语音数据,增加了互动的多样性。
在本申请实施例一种具体实现方式中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距,所述根据所述特征标定数据和所述特征标定更新数据,生成控制全息投影中虚拟对象进行运动的动作控制信息,根据所述动作控制信息实现对所述虚拟对象的运动控制可以包括以下几个步骤,如图6所示:
S301,基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
一些实施例中,所述初始眼间距可以是根据所述原帧图像中特征标定数据(即左眼内眼角的坐标数据和右眼内眼角的坐标数据)计算得到的眼间距,例如,左眼内眼角坐标为(3,5)右眼内眼角坐标为(4,5),则初始眼间距D1为1。同样的,所述更新眼间距可以是根据所述当前帧图像中特征标定更新数据(即左眼内眼角的坐标更新数据和右眼内眼角的坐标更新数据)计算得到的眼间距,所述更新眼间距D2可以是2。
一些实施例中,所述运动控制设备可以基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,可以理解的是,所述角度信息包括转动方向和转动角度值。例如,所述目标对象脸部转动 角度设为θ,,计算得到θ为60o;特征标定数据左眼内眼角坐标数据为(3,5),特征标定更新数据左眼内眼角更新坐标数据为(4,5),则目标对象头部转动方向可以是坐标点(3,5)到坐标点(4,5)作指示的方向,如图2所示的转动方向为向右转动。
S302,发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部;
一些实施例中,所述运动控制设备可以发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部,例如,所述运动控制设备发送“将头部向右转动60o”的动作控制指令控制全息投影中虚拟对象将头部向右转动60o。
在本申请实施例中,根据两帧图像中眼间距的变化生成控制虚拟对象运动的动作控制指令,增加了虚拟对象对目标对象进行动作模仿的准确性。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性;在控制虚拟对象模仿目标对象完成相应动作的同时,控制虚拟对象输出目标对象的语音数据,增加了互动的多样性;根据两帧图像中眼间距的变 化生成控制虚拟对象运动的动作控制指令,增加了虚拟对象对目标对象进行动作模仿的准确性。
下面将结合附图7-附图10,对本申请实施例提供的运动控制设备进行详细介绍。需要说明的是,附图7-附图10所示的设备,用于执行本申请图1-图6所示实施例的方法,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请图1-图6所示的实施例。
请参见图7,为本申请实施例提供了一种运动控制设备的结构示意图。如图7所示,本申请实施例的所述运动控制设备1可以包括:标定数据获取单元11、更新数据获取单元12和运动控制单元13。
标定数据获取单元11,用于基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
在本申请实施例中,运动控制的实现架构可以如图2所示,处理芯片作为处理模块的核心部分,分别与语音识别模块、声卡、人脸识别模块和显卡相连,所述处理模块的输入包括资源、人脸图像和语音数据,所述处理模块对人脸图像和语音数据处理后可以输出至扬声器以及通过激光头投射至全息膜进行全息投影。可以理解的是,所述语音识别模块和所述人脸识别模块分别可以通过所存储的语音识别算法和图像识别算法对输入的人脸图像和语音数据进行识别,所述显卡可以对图像识别后得到的显示信息进行处理,输出至激光头进行投射,所述声卡可以对语音识别后得到的声音信息进行处理,输出至扬声器。所述资源可以是所述运动控制设备中存储的音频或图片资源等。
一些实施例中,运动控制设备1可以采用摄像头获取现实中目标对象的面部图像,再基于自身提供的图像识别算法识别出面部图像的面部特征点(例如,面部五官),并对面部特征点进行标定,即标定面部特 征点在面部图像中的位置坐标。其中,所述目标对象可以是真实客体,例如,人物或动物等。
一些实施例中,标定数据获取单元11可以基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据,可以理解的是,所述面部特征点可以是面部五官,所述原帧图像可以是所述运动控制设备1初始获取到的所述目标对象的面部图像。所述面部特征点在原帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述原帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述原帧图像中的坐标点。所述特征标定数据可以是所述特征位置所处的坐标点的坐标数据,例如,以面部图像左下角为坐标原点,左眼的特征标定数据(即左眼内眼角在原帧图像中的坐标)可以是(3,5)。
更新数据获取单元12,用于当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
一些实施例中,所述运动控制设备1的摄像头可以随时捕捉所述目标对象的面部图像,并将当前捕捉到的面部图像可以作为当前帧图像,可选的,所述运动控制设备1可以采用摄像头对目标对象进行实时录像,并在录像过程中每隔预设的时间间隔获取一帧所述目标对象的面部图像,将该面部图像作为当前处理过程的当前帧图像;或者所述运动控制设备1可以采用摄像头每隔预设的时间间隔获取一次所述面部图像,并将间隔获取到的面部图像作为当前处理过程的当前帧图像。进一步的,所述运动控制设备1可以对所述当前帧图像进行图像识别,对图像中的面部特征点进行标定,并获取标定后面部特征点在当前帧图像中的特征 位置。其中,所述面部特征点在当前帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述当前帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述当前帧图像中的坐标点。
一些实施例中,所述运动控制设备1可以通过将所述原帧图像和所述当前帧图像进行匹配,检测所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置是否匹配(例如,将两帧图像重合,判断同类型的面部特征点是否重合),从而判断所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置是否发生变化。
一些实施例中,所述目标对象的头部发生摇动或者面部表情发生变化等动作,都可以引起所述面部特征点的特征位置发生变化,其中,所述特征位置发生变化可以是一个或多个面部特征点的特征位置发生变化。
一些实施例中,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,可以认为所述目标对象的面部图像发生了转动或者表情出现了变化,更新数据获取单元12可以基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据,所述特征标定更新数据可以是所述目标对象的面部图像发生变化后,所述面部特征点在所述当前帧图像中的坐标数据,例如,左眼的特征标定更新数据(即左眼内眼角在当前帧图像中的坐标)可以是(5,5)。
运动控制单元13,用于根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动;
一些实施例中,运动控制单元13可以根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,可以理解的是,所述动作控制信息可以是控制全息投影中虚拟对象进行运动的信息,可以包括所述虚拟对象的动作幅度或方向等,例如,“向右30o转头”、“微笑”、“点头”等。进一步的,所述运动控制单元13可以采用所述动作控制信息控制全息投影中虚拟对象进行运动。可以理解的是,所述虚拟对象可以是所述运动控制设备1中存储的设备资源中的虚拟客体,也可以是所述运动控制设备1生成的所述目标对象对应的虚拟客体,在设备资源优化的情况下,虚拟客体会呈现更加丰富的影像,例如,设备资源为3D资源时,运动控制设备1中可以呈现3D的影像。
一些实施例中,所述全息投影中的虚拟对象的动作可以与所述目标对象的动作一致,也可以与所述目标对象的动作呈镜像关系。
一些实施例中,所述特征标定数据可以包括至少一个坐标数据,例如,可以是左眼内眼角的坐标数据、右眼内眼角的坐标数据或者鼻尖的坐标数据等,当选取所述特征标定数据中的两个坐标数据时,可以获取所述两个坐标数据间的坐标长度D1,同样的,也可以根据在所述特征标定更新数据中选取的两个坐标数据,获取更新后的坐标长度D2,其中,在所述特征标定数据中选取的两个坐标数据和在所述特征标定更新数据中选取的两个坐标数据是相同类型的坐标数据,例如,都是左眼内眼角的坐标数据和右眼内眼角的坐标数据。
一些实施例中,上述坐标长度D1可以是原帧图像中的眼间距(可以是左眼内眼角到右眼内眼见间的距离),上述坐标长度D2可以是当前帧图像中的眼间距,所述运动控制单元13可以利用D1和D2计算出目标对象面部转动的角度θ,例如,,可以根据特征标定数据到特征标定更新数据间的坐标方向确定目标对象面部转动的方向,例如,特征标 定数据左眼内眼角坐标数据为(3,5),特征标定更新数据左眼内眼角更新坐标数据为(5,5),则目标对象头部转动方向可以是坐标点(3,5)到坐标点(5,5)作指示的方向,如图3所示的转动方向为向右转动。进一步的,所述运动控制单元13可生成包含上述转动角度和方向的动作控制信息(例如,“以θ角度向右转动头部”),控制虚拟对象实现如图4所示的转头动作。
一些实施例中,所述目标对象动作的同时可以输出语音,所述运动控制设备可以识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性。
请参见图8,为本申请实施例提供了另一种运动控制设备的结构示意图。如图8所示,本申请实施例的所述运动控制设备1可以包括:标定数据获取单元11、更新数据获取单元12、运动控制单元13、原位置获取单元14、当前位置获取单元15、位置变化确定单元16和语音控制单元17。
原位置获取单元14,用于采集目标对象的原帧图像,对所述目标对 象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置;
在本申请实施例中,运动控制的实现架构可以如图2所示,处理芯片作为处理模块的核心部分,分别与语音识别模块、声卡、人脸识别模块和显卡相连,所述处理模块的输入包括资源、人脸图像和语音数据,所述处理模块对人脸图像和语音数据处理后可以输出至扬声器以及通过激光头投射至全息膜进行全息投影。可以理解的是,所述语音识别模块和所述人脸识别模块分别可以通过所存储的语音识别算法和图像识别算法对输入的人脸图像和语音数据进行识别,所述显卡可以对图像识别后得到的显示信息进行处理,输出至激光头进行投射,所述声卡可以对语音识别后得到的声音信息进行处理,输出至扬声器。所述资源可以是所述运动控制设备中存储的音频或图片资源等。
一些实施例中,运动控制设备1可以采用摄像头获取现实中目标对象的原帧图像即所述目标对象的面部图像,并可以基于自身提供的图像识别算法识别出面部图像的面部特征点,从而原位置获取单元14可以对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,即标定面部特征点在面部图像中的位置坐标。进一步的,所述原位置获取单元14可以获取标定后所述面部特征点在所述原帧图像中的特征位置。
一些实施例中,所述目标对象可以是真实客体,例如,人物或动物等,所述面部特征点可以是面部五官,所述原帧图像可以是所述运动控制设备1初始获取到的所述目标对象的面部图像。所述面部特征点在原帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述原帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述原帧图像中的坐标点。
标定数据获取单元11,用于基于目标对象的面部特征点在原帧图像 中的特征位置获取所述面部特征点的特征标定数据;
一些实施例中,标定数据获取单元11可以基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据,可以理解的是,所述特征标定数据可以是所述特征位置所处的坐标点的坐标数据,例如,以面部图像左下角为坐标原点,左眼的特征标定数据(即左眼内眼角在原帧图像中的坐标)可以是(3,5)。
当前位置获取单元15,用于获取所述目标对象的面部特征点在当前帧图像中的特征位置;
一些实施例中,所述运动控制设备1的摄像头可以随时捕捉所述目标对象的面部图像,并将当前捕捉到的面部图像可以作为当前帧图像,可选的,所述运动控制设备1可以采用摄像头对目标对象进行实时录像,并在录像过程中每隔预设的时间间隔获取一帧所述目标对象的面部图像,将该面部图像作为当前处理过程的当前帧图像;或者所述运动控制设备1可以采用摄像头每隔预设的时间间隔获取一次所述面部图像,并将间隔获取到的面部图像作为当前处理过程的当前帧图像。进一步的,所述运动控制设备1可以对所述当前帧图像进行图像识别,当前位置获取单元15可以对图像中的面部特征点进行标定,并获取标定后面部特征点在当前帧图像中的特征位置。其中,所述面部特征点在当前帧图像中的特征位置,可以是在所述面部特征点中选取的具有代表性或位置不变性的某点在所述当前帧图像中的坐标点,例如,眼睛内眼角的坐标点或者外眼角的坐标点或者鼻尖等在所述当前帧图像中的坐标点。
位置变化确定单元16,用于当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化;
一些实施例中,所述运动控制设备1可以通过将所述原帧图像和所 述当前帧图像进行匹配,检测所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置是否匹配(例如,将两帧图像重合,判断同类型的面部特征点是否重合),从而判断所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置是否发生变化。当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时(例如,两帧图像重合后,存在同类型的面部特征点不重合),位置变化确定单元16可以确定所述面部特征点的特征位置发生了变化。
一些实施例中,所述目标对象的头部发生摇动或者面部表情发生变化等动作,都可以引起所述面部特征点的特征位置发生变化,可以包括一个或多个面部特征点的特征位置发生变化。
更新数据获取单元12,用于当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
一些实施例中,当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,可以认为所述目标对象的面部图像发生了转动或者表情出现了变化,更新数据获取单元12可以基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据,所述特征标定更新数据可以是所述目标对象的面部图像发生变化后,所述面部特征点在所述当前帧图像中的坐标数据,例如,左眼的特征标定更新数据(即左眼内眼角在当前帧图像中的坐标)可以是(5,5)。
运动控制单元13,用于根据所述特征标定数据和所述特征标定更新 数据,生成控制全息投影中虚拟对象进行运动的动作控制信息,根据所述动作控制信息实现对所述虚拟对象的运动控制;
一些实施例中,运动控制单元13可以根据所述特征标定数据和所述特征标定更新数据,生成控制全息投影中虚拟对象进行运动的动作控制信息,并可以根据所述动作控制信息实现对所述虚拟对象的运动控制。
请一并参考图9,为本申请实施例提供了运动控制单元的结构示意图。如图9所示,所述运动控制单元13可以包括:
数据确定子单元131,用于基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
一些实施例中,数据确定子单元131可以基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据。可以理解的是,所述动作控制数据可以是所述目标对象运动时产生运动数据,例如,目标对象转动头部时的转动角度值或转动方向。
一些实施例中,所述动作控制数据也可以是对所述特征标定数据和所述特征标定更新数据进行运算处理时的中间过程数据,例如,所述特征标定数据可以包括至少一个坐标数据(例如,可以是左眼内眼角的坐标数据、右眼内眼角的坐标数据或者鼻尖的坐标数据等)当选取所述特征标定数据中的两个坐标数据时,可以获取所述两个坐标数据间的坐标长度D1,同样的,也可以根据在所述特征标定更新数据中选取的两个坐标数据,获取更新后的坐标长度D2,则D1和D2可以是动作控制数据。需要说明的是,在所述特征标定数据中选取的两个坐标数据和在所述特征标定更新数据中选取的两个坐标数据是相同类型的坐标数据,例如,都是左眼内眼角的坐标数据和右眼内眼角的坐标数据。
运动控制子单元132,用于采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动;
一些实施例中,运动控制子单元132可以采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动,可以理解的是,所述动作控制信息可以是包含所述动作控制数据的控制信息,例如,“向右30o转头”、“微笑”、“点头”等。所述虚拟对象可以是所述运动控制设备中存储的设备资源中的虚拟客体,也可以是所述运动控制设备生成的所述目标对象对应的虚拟客体,在设备资源优化的情况下,虚拟客体会呈现更加丰富的影像,例如,设备资源为3D资源时,运动控制设备中可以呈现3D的影像。
一些实施例中,所述全息投影中的虚拟对象的动作可以与所述目标对象的动作一致,也可以与所述目标对象的动作呈镜像关系。
语音控制单元17,语音识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出;
一些实施例中,语音控制单元17可以采用内部的语音识别算法识别所述目标对象所发出的语音数据所指代的数据内容,所述语音数据可以是所述目标对象在进行面部动作的同时发出的语音,例如,所述目标对象在微笑时所说的“我现在很开心”。
一些实施例中,所述语音控制单元17可以根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出,例如,所述语音控制单元17可以控制所述虚拟对象输出“我现在很开心”。
在本申请实施例中,在控制所述虚拟对象模仿所述目标对象完成相应动作的同时,控制所述虚拟对象输出所述目标对象的语音数据,增加了互动的多样性。
在本申请实施例一种具体实现方式中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距,如图10所示所述运动控制单元可以包括:
角度信息获取子单元133,用于基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
一些实施例中,所述初始眼间距可以是根据所述原帧图像中特征标定数据(即左眼内眼角的坐标数据和右眼内眼角的坐标数据)计算得到的眼间距,例如,左眼内眼角坐标为(3,5)右眼内眼角坐标为(4,5),则初始眼间距D1为1。同样的,所述更新眼间距可以是根据所述当前帧图像中特征标定更新数据(即左眼内眼角的坐标更新数据和右眼内眼角的坐标更新数据)计算得到的眼间距,所述更新眼间距D2可以是2。
一些实施例中,角度信息获取子单元133可以基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,可以理解的是,所述角度信息包括转动方向和转动角度值。例如,所述目标对象脸部转动角度设为θ,,计算得到θ为60o;特征标定数据左眼内眼角坐标数据为(3,5),特征标定更新数据左眼内眼角更新坐标数据为(4,5),则目标对象头部转动方向可以是坐标点(3,5)到坐标点(4,5)作指示的方向,如图2所示的转动方向为向右转动。
转动控制子单元134,用于发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部;
一些实施例中,转动控制子单元134可以发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部,例如,所述转动控制子单元134发送“将头部向右转动60o”的动作控制指令控制全息投影中虚拟对象将头部向右转动60o。
在本申请实施例中,根据两帧图像中眼间距的变化生成控制虚拟对象运动的动作控制指令,增加了虚拟对象对目标对象进行动作模仿的准 确性。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性;在控制虚拟对象模仿目标对象完成相应动作的同时,控制虚拟对象输出目标对象的语音数据,增加了互动的多样性;根据两帧图像中眼间距的变化生成控制虚拟对象运动的动作控制指令,增加了虚拟对象对目标对象进行动作模仿的准确性。
本申请实施例还提供了一种计算机存储介质,所述计算机存储介质可以存储有多条指令,所述指令适于由处理器加载并执行如上述图1-图6所示实施例的方法步骤,具体执行过程可以参见图1-图6所示实施例的具体说明,在此不进行赘述。
请参见图11,为本申请实施例提供了一种终端的结构示意图。如图11所示,所述终端1000可以包括:至少一个处理器1001,例如CPU,至少一个网络接口1004,用户接口1003,存储器1005,至少一个通信总线1002。其中,通信总线1002用于实现这些组件之间的连接通信。其中,用户接口1003可以包括显示屏(Display)、键盘(Keyboard),可选用户接口1003还可以包括标准的有线接口、无线接口。网络接口 1004可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器1005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器1005可选的还可以是至少一个位于远离前述处理器1001的存储装置。如图11所示,作为一种计算机存储介质的存储器1005中可以包括操作系统、网络通信模块、用户接口模块以及运动控制应用程序。
在图11所示的终端1000中,用户接口1003主要用于为用户提供输入的接口,获取用户输入的数据;网络接口1004用于与用户终端进行数据通信;而处理器1001可以用于调用存储器1005中存储的运动控制应用程序,并具体执行以下操作:
基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
在一个实施例中,所述处理器1001在执行基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据之前,还用于执行以下操作:
采集目标对象的原帧图像,对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置。
在一个实施例中,所述处理器1001在执行当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时, 基于所述面部特征点在所述当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据之前,还用于执行以下操作:
获取所述目标对象的面部特征点在当前帧图像中的特征位置;
当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化。
在一个实施例中,所述处理器1001在执行根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动时,具体执行以下操作:
基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动。
在一个实施例中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距,所述处理器1001在执行根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动时,具体执行以下操作:
基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部。
在一个实施例中,所述处理器1001,还用于执行以下操作:
识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
在本申请实施例中,基于目标对象的面部特征点在原帧图像中的特征位置获取面部特征点的特征标定数据,当面部特征点在当前帧图像中的特征位置相对于在原帧图像中的特征位置发生变化时,基于面部特征点在当前帧图像中的特征位置获取面部特征点的特征标定更新数据,然后根据特征标定数据和特征标定更新数据生成动作控制信息,并采用动作控制信息控制全息投影中虚拟对象进行运动。通过识别现实中目标对象的面部图像,分析现实中目标对象面部特征点在不同帧图像中特征位置的变化,获得控制虚拟对应运动的动作控制信息,完成控制全息投影中虚拟对象对目标对象的动作模仿,降低了开发的硬件成本,基于全息投影提供的真实感提高了显示效果的逼真度,增加了互动的真实性;在控制虚拟对象模仿目标对象完成相应动作的同时,控制虚拟对象输出目标对象的语音数据,增加了互动的多样性;根据两帧图像中眼间距的变化生成控制虚拟对象运动的动作控制指令,增加了虚拟对象对目标对象进行动作模仿的准确性。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
以上所揭露的仅为本申请较佳实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (19)

  1. 一种运动控制方法,由终端设备执行,包括:
    基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
    当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
    根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
  2. 如权利要求1所述的方法,其中,所述基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据之前,还包括:
    采集目标对象的原帧图像,对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置。
  3. 如权利要求1所述的方法,其中,所述当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在所述当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据之前,还包括:
    获取所述目标对象的面部特征点在当前帧图像中的特征位置;
    当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化。
  4. 如权利要求1所述的方法,其中,所述根据所述特征标定数据和 所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动,包括:
    基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
    采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动。
  5. 如权利要求1所述的方法,其中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距;
    所述根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动,包括:
    基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
    发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部。
  6. 如权利要求1所述的方法,进一步包括:
    识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
  7. 一种运动控制设备,包括:
    标定数据获取单元,用于基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
    更新数据获取单元,用于当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
    运动控制单元,用于根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
  8. 如权利要求7所述的设备,进一步包括:
    原位置获取单元,用于采集目标对象的原帧图像,对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置。
  9. 如权利要求7所述的设备,进一步包括:
    当前位置获取单元,用于获取所述目标对象的面部特征点在当前帧图像中的特征位置;
    位置变化确定单元,用于当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化。
  10. 如权利要求7所述的设备,其中,所述运动控制单元包括:
    数据确定子单元,用于基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
    运动控制子单元,用于采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动。
  11. 如权利要求7所述的设备,其中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距;
    所述运动控制单元包括:
    角度信息获取子单元,用于基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
    转动控制子单元,用于发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部。
  12. 如权利要求7所述的设备,进一步包括:
    语音控制单元,语音识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
  13. 一种计算机存储介质,存储有多条指令,所述指令适于由处理器加载并执行如权利要求1~6任意一项的方法步骤。
  14. 一种终端设备,包括:处理器和存储器;其中,所述存储器存储有计算机可读指令,所述计算机可读指令可以使所述处理器:
    基于目标对象的面部特征点在原帧图像中的特征位置获取所述面部特征点的特征标定数据;
    当所述面部特征点在当前帧图像中的特征位置相对于在所述原帧图像中的特征位置发生变化时,基于所述面部特征点在当前帧图像中的特征位置获取所述面部特征点的特征标定更新数据;
    根据所述特征标定数据和所述特征标定更新数据生成动作控制信息,并采用所述动作控制信息控制全息投影中虚拟对象进行运动。
  15. 如权利要求14所述的终端设备,其中,所述计算机可读指令可以使所述处理器:
    采集目标对象的原帧图像,对所述目标对象的面部特征点在所述原帧图像中的位置进行标定,并获取标定后所述面部特征点在所述原帧图像中的特征位置。
  16. 如权利要求14所述的终端设备,其中,所述计算机可读指令可以使所述处理器:
    获取所述目标对象的面部特征点在当前帧图像中的特征位置;
    当所述目标对象的面部特征点在当前帧图像中的特征位置与所述目 标对象的面部特征点在原帧图像中的特征位置不匹配时,确定所述面部特征点的特征位置发生了变化。
  17. 如权利要求14所述的终端设备,其中,所述计算机可读指令可以使所述处理器:
    基于所述特征标定数据和所述特征标定更新数据确定指示所述目标对象动作信息的动作控制数据;
    采用所述动作控制数据所指示的动作控制全息投影中虚拟对象进行运动。
  18. 如权利要求14所述的终端设备,其中,所述特征标定数据为初始眼间距,所述特征标定更新数据为所述目标对象脸部转动后的更新眼间距;
    所述计算机可读指令可以使所述处理器:
    基于所述初始眼间距和所述更新眼间距获取所述目标对象脸部转动的角度信息,所述角度信息包括转动方向和转动角度值;
    发送携带所述角度信息的动作控制指令,控制全息投影中虚拟对象按照所述角度信息指示的方向和角度转动脸部。
  19. 如权利要求14所述的终端设备,其中,所述计算机可读指令可以使所述处理器:
    语音识别所述目标对象所发出的语音数据,根据所述语音数据指示的数据内容并控制所述虚拟对象进行语音输出。
PCT/CN2018/114008 2017-11-23 2018-11-05 一种运动控制方法及其设备、存储介质、终端 WO2019100932A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711185797.7A CN107831902B (zh) 2017-11-23 2017-11-23 一种运动控制方法及其设备、存储介质、终端
CN201711185797.7 2017-11-23

Publications (1)

Publication Number Publication Date
WO2019100932A1 true WO2019100932A1 (zh) 2019-05-31

Family

ID=61653474

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114008 WO2019100932A1 (zh) 2017-11-23 2018-11-05 一种运动控制方法及其设备、存储介质、终端

Country Status (2)

Country Link
CN (1) CN107831902B (zh)
WO (1) WO2019100932A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107831902B (zh) * 2017-11-23 2020-08-25 腾讯科技(上海)有限公司 一种运动控制方法及其设备、存储介质、终端
WO2019205015A1 (en) * 2018-04-25 2019-10-31 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for shaking action recognition based on facial feature points
CN108905193B (zh) * 2018-07-03 2022-04-15 百度在线网络技术(北京)有限公司 游戏操控处理方法、设备及存储介质
CN109726673B (zh) * 2018-12-28 2021-06-25 北京金博星指纹识别科技有限公司 实时指纹识别方法、系统及计算机可读存储介质
CN111435546A (zh) * 2019-01-15 2020-07-21 北京字节跳动网络技术有限公司 模型动作方法、装置、带屏音箱、电子设备及存储介质
CN111514584B (zh) * 2019-02-01 2022-07-26 北京市商汤科技开发有限公司 游戏控制方法及装置、游戏终端及存储介质
CN110058685B (zh) * 2019-03-20 2021-07-09 北京字节跳动网络技术有限公司 虚拟对象的显示方法、装置、电子设备和计算机可读存储介质
CN112784622B (zh) * 2019-11-01 2023-07-25 抖音视界有限公司 图像的处理方法、装置、电子设备及存储介质
CN111249728B (zh) * 2020-01-22 2021-08-31 荣耀终端有限公司 一种图像处理方法、装置及存储介质
CN111768479B (zh) * 2020-07-29 2021-05-28 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备以及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908226A (zh) * 2010-08-16 2010-12-08 北京水晶石数字科技有限公司 一种三维动画制作系统
US20110248987A1 (en) * 2010-04-08 2011-10-13 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
CN104008564A (zh) * 2014-06-17 2014-08-27 河北工业大学 一种人脸表情克隆方法
CN104883557A (zh) * 2015-05-27 2015-09-02 世优(北京)科技有限公司 实时全息投影方法、装置及系统
CN107831902A (zh) * 2017-11-23 2018-03-23 腾讯科技(上海)有限公司 一种运动控制方法及其设备、存储介质、终端

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7205991B2 (en) * 2002-01-25 2007-04-17 Autodesk, Inc. Graphical user interface widgets viewable and readable from multiple viewpoints in a volumetric display
JP2009267556A (ja) * 2008-04-23 2009-11-12 Seiko Epson Corp 画像処理装置、画像処理方法、およびプログラム
KR101694820B1 (ko) * 2010-05-07 2017-01-23 삼성전자주식회사 사용자 위치 인식 방법 및 장치
AU2013205535B2 (en) * 2012-05-02 2018-03-15 Samsung Electronics Co., Ltd. Apparatus and method of controlling mobile terminal based on analysis of user's face
CN103760980A (zh) * 2014-01-21 2014-04-30 Tcl集团股份有限公司 根据双眼位置进行动态调整的显示方法、系统及显示设备
CN106407882A (zh) * 2016-07-26 2017-02-15 河源市勇艺达科技股份有限公司 机器人通过人脸检测实现头部转动的方法及装置
CN106296784A (zh) * 2016-08-05 2017-01-04 深圳羚羊极速科技有限公司 一种通过人脸3d数据,进行面部3d装饰物渲染的算法
CN106354264A (zh) * 2016-09-09 2017-01-25 电子科技大学 基于视线追踪的实时人机交互系统及其工作方法
CN106502075A (zh) * 2016-11-09 2017-03-15 微美光速资本投资管理(北京)有限公司 一种全息投影方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110248987A1 (en) * 2010-04-08 2011-10-13 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
CN101908226A (zh) * 2010-08-16 2010-12-08 北京水晶石数字科技有限公司 一种三维动画制作系统
CN104008564A (zh) * 2014-06-17 2014-08-27 河北工业大学 一种人脸表情克隆方法
CN104883557A (zh) * 2015-05-27 2015-09-02 世优(北京)科技有限公司 实时全息投影方法、装置及系统
CN107831902A (zh) * 2017-11-23 2018-03-23 腾讯科技(上海)有限公司 一种运动控制方法及其设备、存储介质、终端

Also Published As

Publication number Publication date
CN107831902A (zh) 2018-03-23
CN107831902B (zh) 2020-08-25

Similar Documents

Publication Publication Date Title
WO2019100932A1 (zh) 一种运动控制方法及其设备、存储介质、终端
KR102565755B1 (ko) 얼굴의 특징점의 움직임에 따라 모션이 수행된 아바타를 표시하는 전자 장치와 이의 동작 방법
US10489959B2 (en) Generating a layered animatable puppet using a content stream
JP7286684B2 (ja) 顔に基づく特殊効果発生方法、装置および電子機器
US10853677B2 (en) Verification method and system
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
US20190156574A1 (en) Method and system for real-time control of three-dimensional models
US9789403B1 (en) System for interactive image based game
JP7268071B2 (ja) バーチャルアバターの生成方法及び生成装置
WO2022000755A1 (zh) 机器人及其行动控制方法、装置和计算机可读存储介质
JP2019012526A (ja) 映像処理方法、コンピュータプログラムおよび記録媒体
TW201733345A (zh) 使用互動化身的通訊技術(二)
JP7422876B2 (ja) 拡張現実に基づいた表示方法及び装置、並びに記憶媒体
CN109144252B (zh) 对象确定方法、装置、设备和存储介质
JP2023517121A (ja) 画像処理及び画像合成方法、装置及びコンピュータプログラム
US20220319231A1 (en) Facial synthesis for head turns in augmented reality content
US20170213392A1 (en) Method and device for processing multimedia information
CN112669422A (zh) 仿真3d数字人生成方法、装置、电子设备及存储介质
US20220292690A1 (en) Data generation method, data generation apparatus, model generation method, model generation apparatus, and program
US11756251B2 (en) Facial animation control by automatic generation of facial action units using text and speech
CN112767520A (zh) 数字人生成方法、装置、电子设备及存储介质
US10325408B2 (en) Method and device for presenting multimedia information
KR20200071008A (ko) 2차원 이미지 처리 방법 및 이 방법을 실행하는 디바이스
US20230154126A1 (en) Creating a virtual object response to a user input
RU2801917C1 (ru) Способ и устройство для отображения изображений на основе дополненной реальности и носитель для хранения информации

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18881249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18881249

Country of ref document: EP

Kind code of ref document: A1