WO2020147791A1 - Image processing method and apparatus, image device and storage medium - Google Patents
- Publication number: WO2020147791A1 (application PCT/CN2020/072520)
- Authority: WIPO (PCT)
- Classification: G06F18/00 (Pattern recognition)
- This application relates to image processing methods and devices, image equipment and storage media.
- somatosensory games require users to wear dedicated somatosensory equipment that detects the movement of their limbs in order to control a game character.
- the face or body of the user is completely exposed to the network; this raises privacy concerns on the one hand and information-security concerns on the other.
- the facial image may be covered with a mosaic or similar masking, but this degrades the video effect.
- in view of this, embodiments of the present application provide an image processing method and device, an image device, and a storage medium.
- the technical solution of the present application is achieved in this way.
- the present disclosure provides an image processing method, including: acquiring an image; acquiring features of a limb of a body based on the image, wherein the limb includes an upper limb and/or a lower limb; determining first-type movement information of the limb based on the features; and controlling movement of the limb of a controlled model according to the first-type movement information.
- determining the first-type movement information of the limb includes: detecting position information of key points of the limb in the image; and determining the first-type movement information according to the position information of the key points of the limb in the image.
- the method further includes: detecting position information of key points of the body skeleton in the image; acquiring the features of the limb of the body based on the image then includes: determining the position information of the key points of the limb based on the position information of the key points of the body skeleton.
- determining the first-type motion information according to the position information includes: determining, in the image, a position frame containing a first part of the limb according to the position information of the key points of the limb; detecting position information of key points of the first part based on the position frame; and obtaining the first-type motion information of the first part based on the position information of the key points of the first part.
- determining, in the image, the position frame containing the first part of the limb according to the position information of the key points of the limb includes: determining the position frame containing the hand in the image according to the position information of the key points of the hand.
- detecting the position information of the key points of the first part based on the position frame includes: detecting, based on the position frame, the position information of the key points corresponding to the knuckles of the hand and/or the position information of the key points corresponding to the fingertips.
- obtaining the first-type motion information of the first part based on the position information of the key points of the first part includes: obtaining movement information of the fingers of the hand based on the position information of the key points of the first part.
- determining the first-type movement information of the limb further includes: determining the first-type movement information of a second part of the limb according to the position information of the key points of the limb.
- the method further includes: determining the second type of movement information of the connecting part based on the position information of the two local key points connected by the connecting part in the limb.
- the method further includes: determining second-type motion information of the connecting part according to the features of at least two parts and a first motion constraint condition of the connecting part, wherein the at least two parts include the two parts connected by the connecting part; and controlling movement of the connecting part of the controlled model according to the second-type motion information.
- controlling the movement of the connecting part of the controlled model according to the second-type motion information includes: determining a control method for controlling the connecting part according to the type of the connecting part; and controlling the movement of the connecting part of the controlled model according to the control method and the second-type motion information.
- determining the control method for controlling the connecting portion according to the type of the connecting portion includes: when the connecting portion is a first-type connecting portion, determining that the control method is a first-type control method, wherein the first-type control method is used to directly control the movement of the connecting portion of the controlled model corresponding to the first-type connecting portion.
- determining the control method for controlling the connecting portion according to the type of the connecting portion includes: when the connecting portion is a second-type connecting portion, determining that the control method is a second-type control method, wherein the second-type control method is used to indirectly control the movement of the connecting portion of the controlled model corresponding to the second-type connecting portion; the indirect control is realized by controlling a part of the controlled model other than the part corresponding to the second-type connecting portion.
- controlling the movement of the connecting part of the controlled model according to the control mode and the second-type motion information includes: when the control mode is the second-type control mode, decomposing the second-type motion information to obtain first-type rotation information by which the traction part pulls the connecting part into rotation; adjusting the motion information of the traction part according to the first-type rotation information; and controlling the movement of the traction part in the controlled model according to the adjusted motion information of the traction part, so as to indirectly control the movement of the connecting part.
- the method further includes: decomposing the second-type motion information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part; and using the second-type rotation information to control the rotation of the connecting part of the controlled model relative to the traction part.
- the first-type connecting part includes an elbow and/or a knee; the second-type connecting part includes a wrist and/or an ankle.
- when the second-type connecting portion is a wrist, the corresponding traction portion includes the upper arm and/or the forearm; when the second-type connecting portion is an ankle, the corresponding traction portion includes the calf and/or the thigh.
- acquiring the features of the limb of the body based on the image includes: acquiring first 2D coordinates of the limb based on a 2D image; determining the first-type motion information of the limb based on the features includes: obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
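The conversion relationship from 2D to 3D coordinates is not specified in the claim. A minimal sketch, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy) and a per-keypoint depth estimate, could look like this; all names are illustrative assumptions, not the disclosure's actual algorithm:

```python
def lift_2d_to_3d(kp_2d, depth, fx, fy, cx, cy):
    """Back-project 2D keypoints (u, v) to camera-space 3D points.

    A hypothetical pinhole-camera stand-in for the patent's unspecified
    2D-to-3D conversion relationship: each keypoint needs a depth z.
    """
    return [((u - cx) * z / fx, (v - cy) * z / fy, z)
            for (u, v), z in zip(kp_2d, depth)]
```

For example, a keypoint at the principal point lifts to (0, 0, z), and horizontal offsets scale linearly with depth.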
- acquiring the features of the limb of the body includes: acquiring second 3D coordinates of the skeleton key points of the limb based on a 3D image; determining the first-type movement information of the limb based on the features includes: obtaining third 3D coordinates based on the second 3D coordinates.
- obtaining the third 3D coordinates based on the second 3D coordinates includes: adjusting, based on the second 3D coordinates, the 3D coordinates of the skeleton key points corresponding to the occluded part of the limb in the 3D image, thereby obtaining the third 3D coordinates.
- the first type of motion information includes: a quaternion.
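The text names a quaternion as one representation of the first-type motion information. As a sketch, a rotation about an axis can be packed into a (w, x, y, z) quaternion as follows; the function name and conventions are assumptions for illustration:

```python
import math

def axis_angle_to_quaternion(axis, angle_rad):
    """Convert a rotation of angle_rad about `axis` into a unit
    quaternion (w, x, y, z), one common encoding of joint rotation."""
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / n, ay / n, az / n   # normalize the axis
    half = angle_rad / 2.0
    s = math.sin(half)
    return (math.cos(half), ax * s, ay * s, az * s)
```

A zero-angle rotation yields the identity quaternion (1, 0, 0, 0), and a 180-degree turn about z yields (0, 0, 0, 1).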
- the present disclosure provides an image processing device, including: a first acquisition module configured to acquire an image; a second acquisition module configured to acquire features of a limb of a body based on the image, wherein the limb includes an upper limb and/or a lower limb; a first determination module configured to determine first-type movement information of the limb based on the features; and a control module configured to control movement of the limb of the controlled model according to the first-type movement information.
- the present disclosure provides an image device, including: a memory; and a processor connected to the memory and configured to execute computer-executable instructions stored on the memory to implement the image processing method provided by any of the above technical solutions.
- the present disclosure provides a non-volatile computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed by a processor, the image processing method provided by any of the above technical solutions can be implemented.
- the image processing method provided by the embodiments of the present application can collect the movements of the limbs of the subject through image collection, and then control the movements of the limbs of the controlled model.
- the controlled model can simulate the movement of the collection object, such as the user, so as to realize video teaching, video speech, live broadcast or game control, etc.
- the collection object can be hidden, thereby protecting the privacy of users and improving information security.
- FIG. 1 is a schematic flowchart of a first image processing method provided by an embodiment of the disclosure.
- FIG. 2 is a schematic diagram of a flow of detecting first-type motion information of a first part according to an embodiment of the disclosure.
- FIG. 3 is a schematic diagram of key points of a hand provided by an embodiment of the disclosure.
- FIG. 4 is a schematic flowchart of a second image processing method provided by an embodiment of the disclosure.
- FIG. 5A to FIG. 5C are schematic diagrams of the controlled model provided in this embodiment simulating changes of a collected user's hand movement.
- FIG. 6A to FIG. 6C are schematic diagrams of a controlled model provided by an embodiment of the disclosure simulating changes of the torso movement of a collected user.
- FIG. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
- FIG. 8A is a schematic diagram of a key point provided by an embodiment of the disclosure.
- FIG. 8B is a schematic diagram of another key point provided by an embodiment of the disclosure.
- FIG. 8C is a schematic diagram of the hierarchical relationship between the first type of nodes provided by the embodiments of the disclosure.
- FIG. 9 is a schematic diagram of constructing a local coordinate system according to an embodiment of the disclosure.
- FIG. 10 is a schematic structural diagram of an image device provided by an embodiment of the disclosure.
- this embodiment provides an image processing method, which includes the following steps S110 to S140.
- Step S110 Obtain an image.
- Step S120 Based on the image, obtain the features of the limbs of the body.
- the limbs include upper limbs and/or lower limbs.
- Step S130 Determine the first type of movement information of the limb based on the characteristic.
- Step S140 Control the movement of the limbs of the controlled model according to the first type of movement information.
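As a rough sketch, steps S110 to S140 can be expressed as a single frame-processing function. All names here (the feature detector, motion extractor, and model interface) are illustrative assumptions, not components named by the disclosure:

```python
def process_frame(image, detect_limb_features, compute_motion, controlled_model):
    """Drive a controlled model from one image frame.

    Hypothetical pipeline mirroring S110-S140: the image is assumed
    already acquired (S110); callables stand in for the detector and
    motion-information steps.
    """
    features = detect_limb_features(image)   # S120: limb features
    motion_info = compute_motion(features)   # S130: first-type motion info
    controlled_model.apply(motion_info)      # S140: drive the model's limb
    return motion_info
```

In a real system the callables would be deep-learning models and the controlled model a rigged 3D character; here any objects with the same interface work.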
- the image processing method provided in this embodiment can drive the movement of the controlled model through image processing.
- the image processing method provided in this embodiment can be applied to an image device, which can be various electronic devices capable of image processing, for example, an electronic device that performs image collection, image display, and image pixel recombination.
- the image device includes, but is not limited to, various terminal devices, such as mobile terminals and/or fixed terminals, and may also include various servers capable of providing image services.
- the mobile terminal includes portable devices such as mobile phones or tablet computers that are easy for users to carry, and may also include devices worn by users, such as smart bracelets, smart watches, or smart glasses.
- the fixed terminal includes a fixed desktop computer and the like.
- the image acquired in step S110 may be a 2D (two-dimensional) image or a 3D (three-dimensional) image.
- the 2D image may include images collected by a single-lens or multi-lens camera, such as red, green, and blue (RGB) images.
- the way of acquiring the image may include any of the following: using the camera of the imaging device to collect the image; receiving the image from an external device; reading the image from the local database or local storage.
- the 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates; the 3D image may also be an image collected by a 3D camera.
- the acquired image may be one frame of image or multiple frames of image.
- the subsequently obtained movement information may reflect the movement of the limb in the current image relative to the corresponding limb in the initial coordinate system (also referred to as the camera coordinate system).
- the subsequently obtained motion information may reflect the movement of the limbs in the current image relative to the corresponding limbs in the preceding frames.
- the limbs of the body may include upper limbs and/or lower limbs.
- the characteristics of the upper limb and/or the lower limb can be detected, and in step S130, the first type of movement information based on the characteristics is obtained to at least characterize the movement change of the limb.
- a deep learning model such as a neural network can be used to detect the characteristics of the image.
- the controlled model can be the model corresponding to the target.
- if the target is a person, the controlled model is a human body model; if the target is an animal, the controlled model may be a body model of the corresponding animal.
- the controlled model is a model for the category of the target.
- the model can be predetermined.
- the style of the controlled model may be determined based on user instructions, and may include simulated real-life style, anime style, internet celebrity style, literary style, rock style, game style, etc. In the game style, the controlled model can be a game character.
- the user's body movements can be directly transferred to the controlled model.
- the control of the limb movement of the controlled object can be easily realized without the user wearing the somatosensory device.
- the above method can not only protect user privacy, but also ensure the effect that users want.
- the step S130 may include: detecting position information of key points of the limb in the image; and determining the first type of motion information according to the position information.
- the position information of the key points of the limb includes, but is not limited to, the position information of the skeleton key points and/or the position information of the outline key points.
- the position information of the skeleton key points is the position information of the key points of the limb's bones, and the position information of the outline key points is the position information of key points on the outer surface of the limb.
- the first type of motion information can be easily and quickly determined.
- the first type of motion information can be directly used as a motion parameter for driving the motion of the controlled model.
- the method further includes detecting position information of key points of the body skeleton in the image. Detecting the position information of the key points of the limbs in the image includes: determining the position information of the key points of the limbs based on the position information of the key points of the body skeleton.
- a deep learning model such as a neural network can be used to obtain the position information of the key points of the body skeleton, so that the position information of the key points of the limbs can be determined from the distribution of the whole-body key points. After the position information of the whole-body key points is obtained, these key points are connected to obtain the skeleton; based on the relative distribution of the bones and joints in the skeleton, it can be determined which key points belong to the limbs, thereby determining the position information of the key points of the limbs.
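Once the whole-body key points are available in a fixed layout, selecting a limb's key points can be as simple as indexing. The index sets below are hypothetical and depend entirely on the skeleton convention the detector uses (compare the 17-point layout of Fig. 8A):

```python
# Hypothetical index sets; the actual numbering depends on the detector's
# skeleton convention and is not specified by the disclosure.
LIMB_INDICES = {
    "left_arm":  [11, 12, 13],
    "right_arm": [14, 15, 16],
    "left_leg":  [4, 5, 6],
    "right_leg": [1, 2, 3],
}

def limb_keypoints(body_keypoints, limb):
    """Select one limb's keypoints from the whole-body skeleton keypoints."""
    return [body_keypoints[i] for i in LIMB_INDICES[limb]]
```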
- the position of the limb may be first identified through human body recognition, and only the position information of the key point is extracted for the position of the limb.
- the step S130 may include the following steps.
- Step S131 According to the position information of the key points of the limb, a position frame containing the first part of the limb is determined in the image.
- Step S132 Detect the position information of the key point of the first part based on the position frame.
- Step S133 Obtain the first type of motion information of the first part based on the position information of the key points of the first part.
- the movement of the limbs can be further subdivided.
- the movement of the upper limb includes not only the movement of the upper arm and of the lower arm (also called the forearm), but also the more subtle movements of the fingers of the hand.
- the first part may be a hand or a foot.
- the image area containing the first part can be framed out of the image, and further key-point position information is obtained for this image area.
- the step S131 may include: determining a position frame containing the hand in the image according to the position information of the key points of the hand.
- the position frame can be a rectangular frame or a non-rectangular frame.
- if the first part is the hand, the position frame may be a hand-shaped frame that matches the shape of the hand, and such a hand-shaped position frame can be generated accordingly.
- alternatively, the skeleton key points of the hand are detected to obtain a rectangular circumscribing frame containing all of the skeleton key points; this circumscribed frame is a regular rectangular frame.
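A minimal sketch of computing such a rectangular circumscribing frame from the detected hand skeleton key points (the optional margin parameter is an assumption for illustration, not from the disclosure):

```python
def circumscribed_box(keypoints, margin=0.0):
    """Smallest axis-aligned rectangle (x_min, y_min, x_max, y_max)
    containing all skeleton keypoints, optionally grown by a margin."""
    xs = [x for x, y in keypoints]
    ys = [y for x, y in keypoints]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```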
- the step S132 may include: detecting the position information of the key points corresponding to the knuckle joints of the hand and/or the position information of the key points corresponding to the fingertips of the fingers based on the position frame.
- Figure 3 shows a schematic diagram of the key points of a hand.
- in Fig. 3 there are 20 key points of the hand: the key points of the knuckles of the five fingers and the key points of the fingertips, shown as key points P1 to P20 in Fig. 3.
- determining the position information of the key points of the finger joints makes it possible to control the movement of each finger joint of the controlled model, and determining the position information of the key points of the fingertips makes it possible to control the fingertip movement of the controlled model, thereby realizing more refined control in the limb-migration process.
- the step S130 may include: obtaining movement information of the fingers of the hand based on the position information.
- the second part here is a part of the limb other than the first part.
- the second part may include the lower arm and the upper arm between the elbow and shoulder joints.
- the first type of movement information of the second part can be obtained directly based on the position information of the key points of the limbs.
- the respective first-type motion information of different parts are obtained in different ways, so as to achieve precise control of different parts of the limbs in the controlled model.
- the method further includes: determining the second type of movement information of the connecting part based on the position information of the two local key points connected by the connecting part in the limb.
- the step S140 may further include the following steps.
- Step S141 Determine the second-type motion information of the connecting part according to the features of the at least two parts and the first motion constraint condition of the connecting part, wherein the at least two parts include the two parts connected by the connecting part.
- Step S142 Control the movement of the connecting part of the controlled model according to the second type of movement information.
- the motion information of these connecting parts may be inconvenient to detect directly, or may depend to some extent on adjacent parts; therefore, the second-type motion information can be determined according to the other parts connected to the connecting part.
- step S142 further includes: determining a control mode according to the type of the connecting part; and controlling, based on the control mode, the movement of the corresponding connecting part of the controlled model.
- for the lateral rotation of the wrist, taking the extension line from the upper arm to the hand as the axis of rotation, the rotation of the wrist is caused by the rotation of the upper arm.
- for the lateral rotation of the ankle, taking the extension direction of the calf as the rotation axis, the rotation of the ankle is directly driven by the calf.
- determining the control method for controlling the connecting portion according to the type of the connecting portion includes: when the connecting portion is a first-type connecting portion, determining that the control method is a first-type control method, wherein the first-type control method is used to directly control the movement of the connecting portion of the controlled model corresponding to the first-type connecting portion.
- the rotation of the first-type connecting portion is not driven by other parts.
- the second-type connecting portion is a connecting portion other than the first-type connecting portion, and the rotation of the second-type connecting portion includes rotation produced by the traction of other parts.
- determining the control method for controlling the connecting portion according to the type of the connecting portion includes: when the connecting portion is a second-type connecting portion, determining that the control method is a second-type control method, wherein the second-type control method is used to indirectly control the movement of the connecting portion of the controlled model corresponding to the second-type connecting portion; the indirect control is realized by controlling a part of the controlled model other than the part corresponding to the second-type connecting portion.
- the parts other than the second-type connecting portion include but are not limited to: the part directly connected to the second-type connecting portion, or the part indirectly connected to the second-type connecting portion.
- the entire upper limb may be moving, with the shoulder and elbow rotating; in this way, the rotation of the wrist can be indirectly controlled by controlling the lateral rotation of the shoulder and/or the elbow.
- controlling the movement of the connecting part of the controlled model according to the control mode and the second-type motion information includes: when the control mode is the second-type control mode, decomposing the second-type motion information to obtain first-type rotation information by which the traction part pulls the connecting part into rotation; adjusting the motion information of the traction part according to the first-type rotation information; and controlling the movement of the traction part in the controlled model according to the adjusted motion information of the traction part, so as to indirectly control the movement of the connecting part.
- the first-type rotation information is not rotation information generated by the movement of the second-type connecting part itself; rather, it is the movement information, relative to a specific reference point of the target (for example, the center of the human body), produced when the second-type connecting part is pulled by the movement of the other parts connected to it (that is, the traction part).
- the traction part is a part directly connected with the second type connecting part.
- for the wrist, the traction part may be the portion above the wrist, from the forearm to the shoulder.
- for the ankle, the traction part may be the portion above the ankle, from the calf to the root of the thigh.
- the lateral rotation of the wrist along the straight line from the shoulder through the elbow to the wrist may in fact be caused by the rotation of the shoulder or of the elbow; although this movement information is detected at the wrist, the lateral rotation information of the wrist should therefore be assigned to the elbow or the shoulder, so that the adjustment of the movement information of the elbow or shoulder is realized.
- the method further includes: decomposing the second-type motion information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part; and using the second-type rotation information to control the rotation of the connecting part of the controlled model relative to the traction part.
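The disclosure does not say how this decomposition is performed. One plausible sketch, assuming quaternion motion information, is a swing-twist decomposition: the twist about the traction limb's axis corresponds to the rotation pulled by the traction part, while the swing is the connecting part's rotation relative to the traction part. All of this is an illustrative reading, not the patent's stated algorithm:

```python
import math

def swing_twist(q, axis):
    """Split unit quaternion q = (w, x, y, z) so that q = swing * twist,
    where twist is the rotation about the unit vector `axis` (e.g. the
    forearm direction) and swing is the remaining rotation."""
    w, x, y, z = q
    ax, ay, az = axis
    d = x * ax + y * ay + z * az          # vector part projected on axis
    tw, tx, ty, tz = w, d * ax, d * ay, d * az
    n = math.sqrt(tw * tw + tx * tx + ty * ty + tz * tz)
    if n < 1e-9:                          # 180-degree pure-swing edge case
        return q, (1.0, 0.0, 0.0, 0.0)
    twist = (tw / n, tx / n, ty / n, tz / n)
    cw, cx, cy, cz = twist[0], -twist[1], -twist[2], -twist[3]
    # swing = q * conjugate(twist)
    swing = (w * cw - x * cx - y * cy - z * cz,
             w * cx + x * cw + y * cz - z * cy,
             w * cy - x * cz + y * cw + z * cx,
             w * cz + x * cy - y * cx + z * cw)
    return swing, twist
```

For a rotation that is purely about the limb axis, the swing collapses to the identity quaternion and the twist equals the input rotation.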
- the movement information of the second-type connecting portion relative to a predetermined posture can be obtained from the features of the second-type connecting portion, for example, its 2D or 3D coordinates; this movement information is referred to as the second movement information.
- the second type of motion information includes but is not limited to rotation information.
- the first-type rotation information may be obtained by an information model that extracts rotation information directly from the features of the image, and the second-type rotation information is rotation information obtained by adjusting the first-type rotation information.
- the first type of connecting portion includes: elbows and knees.
- the second type of connecting portion includes: wrists and ankles.
- the traction portion corresponding to the wrist includes: an upper arm and/or a forearm. If the second type of connecting part is an ankle, the traction part corresponding to the ankle includes: calf and/or thigh.
- the first type of connecting portion includes a neck connecting the head and the torso.
- determining the second-type motion information of the connecting part according to the features of the at least two parts and the first motion constraint condition of the connecting part includes: determining orientation information of the at least two parts according to their features; determining candidate orientation information of the connecting part according to the orientation information of the at least two parts; and determining the second-type movement information of the connecting part according to the candidate orientation information and the first motion constraint condition.
- determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts includes: determining a first candidate orientation and a second candidate orientation of the connecting part according to the orientation information of the at least two parts.
- two included angles may be formed between the orientation information of the two parts; the included angle that satisfies the first motion constraint condition is used as the second-type motion information.
- the first motion constraint condition of the neck connecting the human face and the torso is: between -90 and 90 degrees; angles exceeding 90 degrees are excluded according to the first motion constraint condition. In this way, abnormal situations in which the rotation angle exceeds 90 degrees clockwise or counterclockwise (for example, 120 or 180 degrees) when the controlled model simulates the movement of the target can be reduced.
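The constraint check can be sketched as a simple range filter over candidate angles. The sign convention (degrees, rightward rotation positive) and the function name are assumptions for illustration:

```python
def select_by_constraint(candidates_deg, lo=-90.0, hi=90.0):
    """Keep the candidate orientation angles satisfying a first motion
    constraint such as the neck's [-90, 90] degree range. For example,
    with candidates 90-right (+90) and 270-left (-270), only +90 survives."""
    return [a for a in candidates_deg if lo <= a <= hi]
```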
- if the face turns 90 degrees to the right, the corresponding candidate neck orientations are 90 degrees to the right or 270 degrees to the left; however, the human neck cannot turn 270 degrees to the left so as to end up facing right. With 90 degrees right and 270 degrees left as the candidate orientation information, the orientation of the neck needs to be further determined according to the aforementioned first motion constraint condition: 90 degrees to the right is the target orientation information of the neck, and the current second-type movement information of the neck is determined to be 90 degrees to the right.
- determining the second-type movement information of the connecting portion according to the candidate orientation information and the first motion constraint condition includes: selecting, from the first candidate orientation information and the second candidate orientation information, the target orientation information that lies within the bounds of the orientation change; and determining the second-type motion information according to the target orientation information.
- the target orientation information here is the information that satisfies the first motion constraint condition.
- determining the orientation information of the at least two parts according to the features of the at least two parts includes: acquiring a first key point and a second key point of each of the at least two parts; acquiring a first reference point of each of the at least two parts, wherein the first reference point is a predetermined key point in the target; generating a first vector based on the first key point and the first reference point, and a second vector based on the second key point and the first reference point; and determining the orientation information of each of the at least two parts based on the first vector and the second vector.
- the first reference point of the first part may be the waist key point of the target or the midpoint of the key points of the two hips. If the second part of the two parts is a human face, the first reference point of the second part may be a connection point between the neck and shoulders of the human face.
- determining the orientation information of each of the at least two parts based on the two vectors includes: taking the cross product of the first vector and the second vector of a part to obtain the normal vector of the plane in which that part lies; the normal vector is used as the orientation information of that part.
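A minimal sketch of this cross-product orientation computation, assuming 3D key points as (x, y, z) tuples; the names are illustrative:

```python
def part_orientation(first_kp, second_kp, reference_pt):
    """Orientation (plane normal) of a body part: form two vectors from
    the part's key points to the reference point, then cross them."""
    v1 = tuple(a - b for a, b in zip(first_kp, reference_pt))
    v2 = tuple(a - b for a, b in zip(second_kp, reference_pt))
    return (v1[1] * v2[2] - v1[2] * v2[1],
            v1[2] * v2[0] - v1[0] * v2[2],
            v1[0] * v2[1] - v1[1] * v2[0])
```

With key points on the x and y axes and the reference at the origin, the normal points along z, i.e. the part's plane is the xy-plane.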
- the orientation of the plane where the part is located is also determined.
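The cross-product step described above can be sketched in a few lines. This is an illustrative sketch only; the function names and the choice of points are hypothetical, not taken from the disclosure.

```python
# Illustrative sketch: a part's orientation as the normal vector of its
# plane, obtained by cross-multiplying two vectors built from key points
# and a reference point. All names here are hypothetical.

def subtract(p, q):
    """Vector p - q for 3D points."""
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def part_orientation(first_key, second_key, reference):
    """First vector: reference -> first key point; second vector:
    reference -> second key point; the normal of their plane serves as
    the part's orientation information."""
    v1 = subtract(first_key, reference)
    v2 = subtract(second_key, reference)
    return cross(v1, v2)

# Two vectors lying in the xoy plane give a normal along the z axis.
print(part_orientation((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)))  # → (0.0, 0.0, 1.0)
```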
- determining the movement information of the connecting portion based on the at least two pieces of local movement information includes: acquiring a third 3D coordinate of the connecting portion relative to a second reference point; and obtaining the absolute rotation information of the connecting portion according to the third 3D coordinate. Controlling the movement of the connecting portion of the controlled model according to the motion information of the connecting portion includes: controlling the movement of the connecting portion of the controlled model based on the absolute rotation information.
- the second reference point may be one of the skeleton key points of the target. Taking a person as the target as an example, the second reference point may be a key point of the local part connected by the first type of connecting portion.
- the second reference point may be the key point of the shoulder connected to the neck.
- the second reference point may be the same as the first reference point.
- both the first reference point and the second reference point may be the root node of the human body, and the root node of the human body may be the human crotch.
- the root node includes but is not limited to the key point 0 shown in FIG. 8A.
- Fig. 8A is a schematic diagram of the skeleton of the human body. Fig. 8A includes 17 skeleton joint points numbered 0-16.
- controlling the movement of the connecting part of the controlled model based on the absolute rotation information further includes: decomposing the absolute rotation information according to the traction hierarchy among the plurality of connecting parts in the target to obtain relative rotation information; and controlling the movement of the connecting part of the controlled model based on the relative rotation information.
- controlling the movement of the connecting part of the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition. Controlling the movement of the connecting part of the controlled model based on the relative rotation information includes: controlling the movement of the connecting part of the controlled model based on the corrected relative rotation information.
- the second motion constraint condition includes: a rotatable angle of the connecting portion.
- the method further includes: performing posture defect correction on the second type of motion information to obtain corrected second-type motion information. Controlling the movement of the connecting part of the controlled model according to the second type of motion information includes: using the corrected second-type motion information to control the movement of the connecting part of the controlled model.
- posture defect correction can be performed on the second type of motion information to obtain the corrected second type of motion information.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- from FIG. 5A to FIG. 5B, and then from FIG. 5B to FIG. 5C, the user's hand moves, and the hand of the controlled model moves accordingly.
- in FIGS. 5A to 5C, the user's hand changes sequentially from making a fist, to extending the palm, to extending the index finger, while the controlled model imitates these gestures in the same sequence.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- the user steps his leg to the right of the image, steps his leg to the left of the image, and finally stands up straight; the controlled model also simulates the user's foot movement.
- the step S120 may include: obtaining first 2D coordinates of the limb based on a 2D image; the step S130 may include: obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a transformation relationship from 2D coordinates to 3D coordinates.
- 2D coordinates are coordinates in a plane coordinate system, and 3D coordinates are coordinates in a 3D coordinate system. 2D coordinates can represent the coordinates of key points in the plane, while 3D coordinates represent the coordinates in three-dimensional space.
- the conversion relationship can be any of various preset conversion functions. For example, taking the position of the image acquisition module as a virtual viewpoint and setting up a virtual 3D space corresponding to a predetermined distance between the acquisition target and the image acquisition module, the first 3D coordinates corresponding to the first 2D coordinates can be obtained by projecting the 2D coordinates into the 3D space.
- the step S120 may include: obtaining second 3D coordinates of the skeleton key points of the limb based on a 3D image; the step S130 may include: obtaining third 3D coordinates based on the second 3D coordinates.
- the 3D image directly acquired in step S110 includes: a 2D image and a depth image corresponding to the 2D image.
- the 2D image can provide the coordinate value of the skeleton key point in the xoy plane
- the depth value in the depth image can provide the coordinate of the skeleton key point on the z axis.
- the z axis is perpendicular to the xoy plane.
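As a hedged illustration of combining the xoy-plane coordinates with the z-axis depth value, the following sketch back-projects a 2D skeleton key point to 3D under an assumed pinhole camera model. The intrinsics fx, fy, cx, cy and the function name are assumptions for illustration and do not appear in the original text.

```python
# Hypothetical sketch: lifting a 2D skeleton key point to 3D using the
# depth image, under an assumed pinhole camera model.

def lift_keypoint(u, v, depth, fx, fy, cx, cy):
    """(u, v) is the key point in the xoy image plane; depth is the z
    value read from the depth image at that pixel. Returns (x, y, z) in
    the 3D camera coordinate system, with the z axis perpendicular to
    the xoy plane."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A key point at the principal point lies on the optical axis.
print(lift_keypoint(320.0, 240.0, 2.0, 500.0, 500.0, 320.0, 240.0))  # → (0.0, 0.0, 2.0)
```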
- the 3D coordinates of the skeleton key points corresponding to the occluded parts of the limbs in the 3D image are adjusted to obtain the third 3D coordinates.
- obtaining the third 3D coordinates based on the second 3D coordinates includes: adjusting, based on the second 3D coordinates, the 3D coordinates of the skeleton key points corresponding to the occluded parts of the limbs in the 3D image, so as to obtain the third 3D coordinates.
- the position of the knees of both legs is the same in the depth image.
- the knee closer to the image acquisition module blocks the knee relatively far from the image acquisition module.
- Using deep learning models or machine learning models to adjust the 3D coordinates of the key points of the skeleton can obtain 3D coordinates that more accurately represent the first type of motion information.
- the first type of motion information includes: a quaternion.
- the quaternion can be used to accurately characterize the spatial position of the second-type local part and/or its rotation in each direction.
- it can also be represented by coordinates such as Euler coordinates or Lagrangian coordinates.
- this embodiment provides an image processing device, which includes: a first acquisition module 110 configured to acquire an image; a second acquisition module 120 configured to acquire features of body limbs based on the image, where the limbs include upper limbs and/or lower limbs; a first determining module 130 configured to determine the first type of movement information of the limb based on the features; and a control module 140 configured to control the movement of the limbs of the controlled model according to the first type of movement information.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may be program modules; after being executed by a processor, these program modules can realize the acquisition of the image, the determination of the limb features, the determination of the first type of motion information, and the motion control of the controlled model.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may alternatively be combined software-hardware modules; the combined software-hardware modules may include various programmable arrays, including but not limited to field programmable gate arrays or complex programmable arrays.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may be pure hardware modules, and the pure hardware modules include but are not limited to application specific integrated circuits.
- the first determining module 130 is specifically configured to detect position information of key points of limbs in the image; and determine the first type of motion information according to the position information.
- the device further includes: a detection module configured to detect the position information of key points of the body skeleton in the image; the second acquisition module 120 is specifically configured to determine the position information of the key points of the limb based on the position information of the key points of the body skeleton.
- the first determining module 130 is specifically configured to determine, in the image, a position frame containing the first part of the limb according to the position information of the key points of the limb; detect the position information of the key points of the first part based on the position frame; and obtain the first-type motion information of the first part based on the position information of the key points of the first part.
- the first determining module 130 is specifically configured to determine the position frame containing the hand in the image according to the position information of the key points of the hand.
- the first determining module 130 is further configured to detect, based on the position frame, the position information of the key points corresponding to the knuckles of the hand and/or the key points corresponding to the fingertips.
- the first determining module 130 is further configured to obtain movement information of the fingers of the hand based on the position information of the key points of the first part.
- the first determining module 130 is further configured to determine the first type of movement information of the second part of the limb according to the position information of the key points of the limb.
- the device further includes: a second determining module configured to determine the second type of motion information of the connecting part based on the position information of the key points of the two local parts connected by the connecting part in the limb.
- the device further includes a connecting part control module, configured to determine the second type of motion information of the connecting part according to the features of the at least two local parts and the first motion constraint condition of the connecting part, and to control the motion of the connecting part of the controlled model according to the second type of motion information.
- the connecting part control module is further configured to determine a control mode for controlling the connecting part according to the type of the connecting part, and to control the movement of the connecting part of the controlled model according to the control mode and the second type of motion information.
- the connecting part control module is further configured to determine that the control mode is a first-type control mode when the connecting part is a first-type connecting part, wherein the first-type control mode is used to directly control the movement of the connecting part corresponding to the first-type connecting part in the controlled model.
- the connecting part control module is further configured to determine that the control mode is a second-type control mode when the connecting part is a second-type connecting part, wherein the second-type control mode is used to indirectly control the movement of the connecting part corresponding to the second-type connecting part in the controlled model; the indirect control is realized by controlling a local part of the controlled model other than the second-type connecting part.
- the connecting part control module is further configured to decompose the second type of motion information when the control mode is the second-type control mode, so as to obtain first-type rotation information of the connecting part rotating as pulled by a traction part; adjust the motion information of the traction part according to the first-type rotation information; and use the adjusted motion information of the traction part to control the movement of the traction part in the controlled model, thereby indirectly controlling the movement of the connecting part.
- the connecting part control module is further used to decompose the second type of motion information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part, and to control the rotation of the connecting part of the controlled model relative to the traction part by using the second-type rotation information.
- the first type of connecting part includes: elbow; knee; the second type of connecting part includes: wrist; ankle.
- if the second-type connecting portion is a wrist, the traction portion corresponding to the wrist includes the upper arm and/or the forearm; and/or, if the second-type connecting portion is an ankle, the traction portion corresponding to the ankle includes the thigh and/or the calf.
- the second acquiring module 120 is specifically configured to acquire the first 2D coordinates of the limb based on a 2D image; the first determining module 130 is configured to obtain the first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and the transformation relationship from 2D coordinates to 3D coordinates.
- the second acquiring module 120 is specifically configured to acquire the second 3D coordinates of the skeleton key points of the limb based on a 3D image; the first determining module 130 is specifically configured to obtain the third 3D coordinates based on the second 3D coordinates.
- the first determining module 130 is specifically configured to adjust, based on the second 3D coordinates, the 3D coordinates of the skeleton key points corresponding to the occluded parts of the limb in the 3D image, so as to obtain the third 3D coordinates.
- the first type of motion information includes: a quaternion.
- This example provides an image processing method, including: a camera collects pictures; for each picture, a portrait is first detected, then the key points of the human hand and wrist are detected, and the position frame of the hand is determined based on these key points. After that, 14 key points and 63 contour points of the human skeleton are obtained. Once the key points are detected, the position of the hand is known, and the hand frame is then calculated.
- the hand frame here corresponds to the aforementioned position frame.
- the hand frame does not include the wrist, but in some cases, for example when the hand is inclined, it may also include part of the wrist.
- the key point may be equivalent to the center point of the joint between the hand and the arm.
- the 2D coordinates of the points can be input into the existing neural network to calculate the 3D coordinates of the points.
- the input of the network is 2D points
- the output is 3D points.
- the bending angle of the arm can be calculated, and then the angle can be assigned to the controlled model, such as the Avatar model.
- the Avatar model will do the same action.
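The arm-bending computation mentioned above can be sketched as follows. This is a minimal sketch assuming the bending angle at the elbow is computed from three 3D key points (shoulder, elbow, wrist); the key-point layout and function names are illustrative assumptions, not details from the disclosure.

```python
import math

# Illustrative sketch: the bending angle of the arm from the 3D
# coordinates of three key points, which could then be assigned to the
# corresponding joint of an Avatar-style controlled model.

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding
    return math.degrees(math.acos(cos_t))

shoulder, elbow, wrist = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # → 90.0
```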
- the depth information is the depth value on the z axis.
- TOF information is the raw form of the depth information.
- the image coordinate system of the 2D image, that is, the xoy plane, combined with the depth value on the z axis, yields the 3D coordinates.
- some 3D coordinate points may be occluded.
- the occluded points can be filled.
- through network learning, a complete human skeleton can be obtained. The effect is better when TOF is used, because a 2D RGB image has no depth perception.
- the input information is stronger and the accuracy can be improved.
- the controlled model in this example may be a game character in a game scene, a teacher model in an online education video in an online teaching scene, and a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
- the clothes of the teacher model may be more formal, for example, a suit.
- the controlled model may also wear sportswear.
- the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
- the hand key points of the upper limb key points include but are not limited to the key points of the wrist joints, the key points of the finger joints, the key points of the knuckles, and the key points of the fingertips.
- the location of these key points can reflect the movement of the hands and fingers.
- Use the trunk quaternion to control the torso movement of the controlled model; use the limb quaternion to control the limb movement of the controlled model.
- the torso key points and the limb key points may include: 14 key points or 17 key points. There are 17 key points shown in Figure 8A.
- This example provides an image processing method.
- the steps of the method are as follows. Acquire an image; the image contains a target, and the target can be a human body. According to the image, the 3D posture of the target in three-dimensional space is obtained; the 3D posture can be represented by the 3D coordinates of the key points of the human skeleton. The absolute rotation parameters of the joints of the human body in the camera coordinate system are then obtained; the absolute rotation can be calculated from the coordinates in the camera coordinate system, and the coordinate direction of each joint is obtained from those coordinates. Finally, the relative rotation parameters of the joints are determined according to the hierarchical relationship.
- Determining the relative parameters may specifically include: determining the position of the key point of the joint relative to the root node of the human body.
- the relative rotation parameter can be used for quaternion representation.
- the hierarchical relationship here can be the traction relationship between joints. For example, the movement of the elbow joint will pull the movement of the wrist joint to a certain extent, and the movement of the shoulder joint will also pull the movement of the elbow joint.
- the hierarchical relationship may be predetermined according to the joints of the human body. Use this quaternion to control the rotation of the controlled model.
- the first level: the pelvis
- the second level: the waist
- the third level: the thighs (for example, the left thigh and the right thigh)
- the fourth level: the calves (for example, the left calf and the right calf)
- the fifth level: the feet.
- the first level: the chest; the second level: the neck; the third level: the head.
- the first level: the clavicle, corresponding to the shoulder; the second level: the upper arm; the third level: the forearm; the fourth level: the hand.
- the levels decrease successively; the local movement of a higher level affects the local movement of a lower level. Therefore, the level of the traction part is higher than that of the connecting part.
- when determining the second type of motion information, first obtain the motion information of the local key points of each level, and then, based on the hierarchical relationship, determine the motion information of the low-level local key points relative to the high-level key points (that is, the relative rotation information).
- the relative rotation information can be expressed by the following formula (1): given the rotation quaternions {Q_0, Q_1, ..., Q_18} of each key point relative to the camera coordinate system, the rotation quaternion q_i of each key point relative to its parent key point is calculated as q_i = Q_parent(i)^(-1) * Q_i (1).
- the parent key point parent(i) is the key point one level above the current key point i.
- Q_i is the rotation quaternion of the current key point i relative to the camera coordinate system; Q_parent(i)^(-1) is the inverse rotation parameter of the key point one level up.
- for example, if the rotation angle of Q_parent(i), the rotation parameter of the key point one level up, is 90 degrees, then the rotation angle of Q_parent(i)^(-1) is -90 degrees.
- the aforementioned use of quaternions to control the motion of each joint of the controlled model may include: using q i to control the motion of each joint of the controlled model.
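A minimal sketch of computing q_i from the absolute quaternions, assuming unit quaternions in (w, x, y, z) order; the parent table and all names below are invented for illustration and are not taken from the disclosure.

```python
import math

# Hedged sketch of q_i = Q_parent(i)^(-1) * Q_i: each key point's rotation
# relative to its parent, computed from absolute (camera-frame) rotation
# quaternions. Quaternions are (w, x, y, z) tuples.

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def relative_rotation(Q, parent, i):
    """q_i = Q_parent(i)^(-1) * Q_i."""
    return q_mul(q_inv(Q[parent[i]]), Q[i])

# Parent rotated 90 degrees about z; child rotated 180 degrees about z.
# The relative rotation should be the remaining 90 degrees about z.
Q = {0: (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)),
     1: (0.0, 0.0, 0.0, 1.0)}
parent = {1: 0}
print(relative_rotation(Q, parent, 1))
```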
- the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle within a constraint condition, wherein the constraint condition may limit the first Euler angle; obtaining a quaternion corresponding to the second Euler angle; and using that quaternion to control the rotation of the controlled model.
- when obtaining the quaternion corresponding to the second Euler angle, the second Euler angle can be directly converted into a quaternion.
- Fig. 8A is a skeleton diagram of 17 key points.
- Figure 8B is a skeleton diagram of 19 key points.
- on the basis of Fig. 8A, the skeleton diagram shown in Fig. 8B is formed.
- the bones shown in Figure 8B can correspond to 19 key points, referring to the following bones: pelvis, waist, left thigh, left calf, left foot; right thigh, right calf, right foot, chest, neck, head, Left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, left hand.
- the details can be as follows:
- (x i , y i , z i ) can be the coordinates of the i-th key point, and the value of i ranges from 0 to 16.
- p i represents the three-dimensional coordinates in the local coordinate system of node i, which are generally fixed values that come with the original model and do not need to be modified or migrated.
- q i is a quaternion, which represents the rotation of the bone controlled by node i in the coordinate system of its parent node. It can also be considered as the rotation of the local coordinate system of the current node and the local coordinate system of the parent node.
- the process of calculating the quaternion of the key point corresponding to each joint can be as follows: determine the coordinate axis directions of the local coordinate system of each node. For each bone, the direction from the child node to the parent node is the x-axis; the normal of the plane in which the two bones connected at a node lie is the z-axis; if the rotation axis cannot be determined, the direction the human body faces is taken as the y-axis. Figure 9 shows the schematic diagram of the local coordinate system of node A.
- This example uses the left-handed coordinate system for illustration, and the right-handed coordinate system can also be used in specific implementation.
- (i-j) represents the vector that i points to j, and x represents the cross product.
- (1-7) represents the vector from the first key point to the seventh key point.
- nodes 8, 15, 11, and 18 are the four nodes of the hands and feet. Since calculating the quaternions of these four nodes requires specific postures to be determined, these four nodes are not included in the table.
- the number of the 19-point skeleton node can be seen in FIG. 8C; the key point number of the 17-point skeleton can be seen in FIG. 8A.
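The axis construction described above (x-axis from child node to parent node, z-axis as the normal of the plane containing the two bones meeting at a node) might be sketched as follows. Handedness and the y-axis tie-breaking rule are simplified here, and all names are hypothetical.

```python
import math

# Illustrative sketch of a node's local coordinate system: x-axis from
# the child node toward the parent node, z-axis as the normal of the
# plane containing the two bones meeting at the node, y-axis completing
# the frame.

def norm(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def local_frame(child, node, parent):
    """Build the local coordinate system at `node`."""
    x_axis = norm(tuple(p - c for p, c in zip(parent, child)))  # child -> parent
    bone_in = tuple(n - p for n, p in zip(node, parent))        # parent -> node
    bone_out = tuple(c - n for c, n in zip(child, node))        # node -> child
    z_axis = norm(cross(bone_in, bone_out))                     # plane normal
    y_axis = cross(z_axis, x_axis)                              # completes the frame
    return x_axis, y_axis, z_axis

# Parent at the origin, node along x, child bending toward y:
axes = local_frame((1.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(axes[2])  # z-axis, the normal of the bend plane → (0.0, 0.0, 1.0)
```

Note that this sketch degenerates when the two bones are collinear, which is exactly the "rotation axis cannot be determined" case the text resolves by using the body-facing direction as the y-axis.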
- the process of solving the first Euler angle is as follows: after calculating the local rotation quaternion q_i of a joint point, first convert it to Euler angles, using the xyz order by default.
- Y = asin(2*(q1*q3 + q0*q2)), where the value of 2*(q1*q3 + q0*q2) is limited to between -1 and 1 (3)
- X is the Euler angle in the first direction
- Y is the Euler angle in the second direction
- Z is the Euler angle in the third direction. Any two of the first direction, the second direction, and the third direction are perpendicular.
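A hedged reconstruction of the xyz-order conversion: formula (3) gives the Y component, and the X and Z companions below follow the standard convention for this order; they are an assumption supplied for completeness, not quoted from the text.

```python
import math

# Hedged sketch of the xyz-order quaternion-to-Euler conversion, with
# q = (q0, q1, q2, q3) = (w, x, y, z). The asin argument is clamped to
# [-1, 1] as stated for formula (3); the X and Z formulas are the
# standard companions for this order (an assumption, not from the text).

def quat_to_euler_xyz(q0, q1, q2, q3):
    s = max(-1.0, min(1.0, 2.0 * (q1 * q3 + q0 * q2)))
    Y = math.asin(s)                                  # second direction
    X = math.atan2(2.0 * (q0 * q1 - q2 * q3),
                   1.0 - 2.0 * (q1 * q1 + q2 * q2))   # first direction
    Z = math.atan2(2.0 * (q0 * q3 - q1 * q2),
                   1.0 - 2.0 * (q2 * q2 + q3 * q3))   # third direction
    return X, Y, Z

# A pure 30-degree rotation about the second axis:
q = (math.cos(math.pi / 12), 0.0, math.sin(math.pi / 12), 0.0)
X, Y, Z = quat_to_euler_xyz(*q)
print(round(math.degrees(Y), 6))  # → 30.0
```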
- the method further includes: performing posture optimization adjustment on the second Euler angle. For example, some of the second Euler angles may be adjusted, based on preset rules, to posture-optimized Euler angles, so as to obtain a third Euler angle.
- Obtaining the quaternion corresponding to the second Euler angle may include: converting the third Euler angle into a quaternion for controlling the controlled model.
- the method further includes: after converting the second Euler angles into a quaternion, performing posture optimization processing on the converted quaternion data. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
- when adjusting the second Euler angle or the quaternion obtained by converting the second Euler angle, the adjustment may be based on a preset rule, or may be optimized by a deep learning model itself; there are many specific implementation methods, which are not limited in this application.
- pre-processing may also be included.
- the width of the crotch and/or shoulder of the controlled model is modified to correct the overall posture of the human body.
- the standing posture of the human body can be corrected, for example by upright-standing correction and abdomen correction. Some people protrude their abdomen when standing; abdomen correction prevents the controlled model from imitating the user's protruding abdomen. Some people hunch their back when standing; hunchback correction prevents the controlled model from imitating the user's hunched back.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired, and the image includes a target, and the target may include at least one of a human body, a human upper limb, and a human lower limb.
- the coordinate system of the limb part that will pull the target joint movement is obtained.
- the rotation of the target joint relative to the limb part is determined to obtain rotation parameters; the rotation parameters include the spin parameter of the target joint and the rotation parameter produced by traction of the limb part.
- a first angle limit is applied to the rotation parameter produced by traction of the limb part to obtain the final traction rotation parameter; the local rotation parameter of the limb part is adjusted according to the final traction rotation parameter.
- the second angle restriction is performed on the relative rotation parameter to obtain the restricted relative rotation parameter.
- the quaternion is obtained. The movement of the target joint of the controlled model is controlled according to the quaternion.
- the coordinate system of the hand in the image coordinate system is obtained, and the coordinate system of the forearm and the coordinate system of the upper arm are obtained.
- the target joint at this time is the wrist joint.
- the rotation of the hand relative to the forearm is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the forearm; specifically, the pulled rotation is assigned to the rotation in the corresponding direction of the forearm, and the first angle limit of the forearm is used to limit the maximum rotation of the forearm. The rotation of the hand relative to the corrected forearm is then determined to obtain the relative rotation parameter, and a second angle restriction is performed on the relative rotation parameter to obtain the rotation of the hand relative to the forearm.
- the coordinate system of the foot under the image coordinate system is obtained, and the coordinate system of the lower leg and the coordinate system of the thigh are obtained; the target joint at this time is the ankle joint.
- the rotation of the foot relative to the calf is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the calf; specifically, the pulled rotation is assigned to the rotation in the corresponding direction of the calf, and the first angle limit of the calf is used to limit the maximum rotation of the calf. The rotation of the foot relative to the corrected calf is then determined to obtain the relative rotation parameter, and a second angle restriction is performed on the relative rotation parameter to obtain the rotation of the foot relative to the calf.
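The spin/pulled-rotation split used in the wrist and ankle examples resembles the standard swing-twist decomposition of a rotation about a bone axis; the sketch below uses that formulation as an assumption, since the original text does not name a specific algorithm, and all names are hypothetical.

```python
import math

# Hedged sketch: splitting a rotation into "spin" (twist about the limb
# axis) and "pulled rotation" (swing), via swing-twist decomposition.
# Quaternions are (w, x, y, z) tuples; `axis` is a unit 3-vector.

def q_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def swing_twist(q, axis):
    """Decompose unit quaternion q as q = swing * twist, where the twist
    is the rotation about `axis`. (Degenerates if q is a 180-degree
    rotation exactly perpendicular to the axis.)"""
    w, x, y, z = q
    proj = x * axis[0] + y * axis[1] + z * axis[2]  # project onto the axis
    twist = (w, proj * axis[0], proj * axis[1], proj * axis[2])
    n = math.sqrt(sum(c * c for c in twist))
    twist = tuple(c / n for c in twist)
    swing = q_mul(q, q_conj(twist))
    return swing, twist

# A pure rotation about the x axis is all twist (spin), no swing:
q = (math.cos(math.pi / 8), math.sin(math.pi / 8), 0.0, 0.0)
swing, twist = swing_twist(q, (1.0, 0.0, 0.0))
print([round(c, 6) for c in swing])  # → [1.0, 0.0, 0.0, 0.0]
```

In the wrist example, the twist would correspond to the spin kept at the hand, while the swing is the pulled rotation transferred to the forearm before angle limiting.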
- an embodiment of the present application provides an image device, including: a memory 1030, configured to store information; and a processor 1020, connected to the memory 1030 and configured to execute computer-executable instructions stored in the memory 1030, so as to implement the image processing methods provided by one or more of the foregoing technical solutions, for example, the image processing methods shown in FIG. 1, FIG. 2 and/or FIG. 4.
- the memory 1030 may be various types of memory, such as random access memory, read-only memory, flash memory, etc.
- the memory 1030 may be used for information storage, for example, storing computer-executable instructions and the like.
- the computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
- the processor 1020 may be any of various types of processors, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
- the processor 1020 may be connected to the memory 1030 through a bus.
- the bus may be an integrated circuit bus or the like.
- the terminal device may further include: a communication interface 1040, and the communication interface 1040 may include: a network interface, for example, a local area network interface, a transceiver antenna, and the like.
- the communication interface 1040 is also connected to the processor 1020 and can be used for information transmission and reception.
- the terminal device further includes a human-computer interaction interface 1050.
- the human-computer interaction interface 1050 may include various input and output devices, such as a keyboard and a touch screen.
- the imaging device further includes a display 1010, which can display various prompts, collected facial images, and/or various interfaces.
- the embodiment of the present application provides a non-volatile computer storage medium that stores computer-executable code; after the computer-executable code is executed, the image processing method provided by one or more of the foregoing technical solutions can be realized, for example, the image processing method shown in FIG. 1, FIG. 2 and/or FIG. 4.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the embodiments of the present application may all be integrated into one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
- the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
Abstract
Description
Claims (44)
- An image processing method, comprising: acquiring an image; acquiring, based on the image, features of a limb of a body, wherein the limb includes an upper limb and/or a lower limb; determining, based on the features, first-type movement information of the limb; and controlling, according to the first-type movement information, movement of the limb of a controlled model.
- The method according to claim 1, wherein determining the first-type movement information of the limb based on the features comprises: detecting position information of key points of the limb in the image; and determining the first-type movement information according to the position information.
- The method according to claim 2, further comprising: detecting position information of key points of a body skeleton in the image; wherein acquiring the features of the limb of the body based on the image comprises: determining position information of the key points of the limb based on the position information of the key points of the body skeleton.
- The method according to claim 3, wherein determining the first-type movement information according to the position information comprises: determining, in the image according to the position information of the key points of the limb, a position box containing a first local part of the limb; detecting position information of key points of the first local part based on the position box; and obtaining first-type movement information of the first local part based on the position information of the key points of the first local part.
- The method according to claim 4, wherein determining, in the image according to the position information of the key points of the limb, the position box containing the first local part of the limb comprises: determining, in the image according to position information of hand key points, a position box containing a hand.
- The method according to claim 5, wherein detecting the position information of the key points of the first local part based on the position box comprises: detecting, based on the position box, position information of key points corresponding to finger joints of the hand and/or position information of key points corresponding to fingertips.
- The method according to any one of claims 4 to 6, wherein obtaining the first-type movement information of the first local part based on the position information of the key points of the first local part comprises: obtaining movement information of fingers of the hand based on the position information of the key points of the first local part.
- The method according to any one of claims 3 to 7, wherein determining the first-type movement information of the limb based on the features further comprises: determining first-type movement information of a second local part of the limb according to the position information of the key points of the limb.
- The method according to any one of claims 1 to 8, further comprising: determining second-type movement information of a connecting part based on position information of key points of two local parts of the limb connected by the connecting part.
- The method according to claim 9, further comprising: determining the second-type movement information of the connecting part according to features of at least two local parts and a first movement constraint condition of the connecting part, wherein the at least two local parts include the two local parts connected by the connecting part; and controlling movement of the connecting part of the controlled model according to the second-type movement information.
- The method according to claim 10, wherein controlling the movement of the connecting part of the controlled model according to the second-type movement information comprises: determining, according to a type of the connecting part, a control mode for controlling the connecting part; and controlling the movement of the connecting part of the controlled model according to the control mode and the second-type movement information.
- The method according to claim 11, wherein determining, according to the type of the connecting part, the control mode for controlling the connecting part comprises: in a case where the connecting part is a first-type connecting part, determining that the control mode is a first-type control mode, wherein the first-type control mode is used for directly controlling movement of a connecting part of the controlled model corresponding to the first-type connecting part.
- The method according to claim 11, wherein determining, according to the type of the connecting part, the control mode for controlling the connecting part comprises: in a case where the connecting part is a second-type connecting part, determining that the control mode is a second-type control mode, wherein the second-type control mode is used for indirectly controlling movement of a connecting part of the controlled model corresponding to the second-type connecting part, and the indirect control is implemented by controlling a local part of the controlled model corresponding to a local part other than the second-type connecting part.
- The method according to claim 13, wherein controlling the movement of the connecting part of the controlled model according to the control mode and the second-type movement information comprises: in a case where the control mode is the second-type control mode, decomposing the second-type movement information to obtain first-type rotation information of the connecting part being pulled to rotate by a traction part; adjusting movement information of the traction part according to the first-type rotation information; and controlling movement of the traction part of the controlled model by using the adjusted movement information of the traction part, so as to indirectly control the movement of the connecting part.
- The method according to claim 14, further comprising: decomposing the second-type movement information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part; and controlling, by using the second-type rotation information, rotation of the connecting part of the controlled model relative to the traction part.
- The method according to any one of claims 13 to 15, wherein the first-type connecting part includes an elbow and a knee, and the second-type connecting part includes a wrist and an ankle.
- The method according to any one of claims 13 to 16, wherein in a case where the second-type connecting part is a wrist, the traction part corresponding to the wrist includes an upper arm and/or a forearm; and in a case where the second-type connecting part is an ankle, the traction part corresponding to the ankle includes a lower leg and/or a thigh.
- The method according to any one of claims 1 to 17, wherein acquiring the features of the limb of the body based on the image comprises: acquiring first 2D coordinates of the limb based on a 2D image; and determining the first-type movement information of the limb based on the features comprises: obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
- The method according to any one of claims 1 to 17, wherein acquiring the features of the limb of the body based on the image comprises: acquiring second 3D coordinates of skeleton key points of the limb based on a 3D image; and determining the first-type movement information of the limb based on the features comprises: obtaining third 3D coordinates based on the second 3D coordinates.
- The method according to claim 19, wherein obtaining the third 3D coordinates based on the second 3D coordinates comprises: adjusting, based on the second 3D coordinates, 3D coordinates of skeleton key points corresponding to an occluded part of the limb in the 3D image, thereby obtaining the third 3D coordinates.
- The method according to any one of claims 1 to 20, wherein the first-type movement information includes a quaternion.
- An image processing apparatus, comprising: a first acquisition module configured to acquire an image; a second acquisition module configured to acquire, based on the image, features of a limb of a body, wherein the limb includes an upper limb and/or a lower limb; a first determination module configured to determine, based on the features, first-type movement information of the limb; and a control module configured to control, according to the first-type movement information, movement of the limb of a controlled model.
- The apparatus according to claim 22, wherein the first determination module is specifically configured to: detect position information of key points of the limb in the image; and determine the first-type movement information according to the position information.
- The apparatus according to claim 23, further comprising: a detection module configured to detect position information of key points of a body skeleton in the image, wherein the second acquisition module is specifically configured to determine position information of the key points of the limb based on the position information of the key points of the body skeleton.
- The apparatus according to claim 24, wherein the first determination module is specifically configured to: determine, in the image according to the position information of the key points of the limb, a position box containing a first local part of the limb; detect position information of key points of the first local part based on the position box; and obtain first-type movement information of the first local part based on the position information of the key points of the first local part.
- The apparatus according to claim 25, wherein the first determination module is specifically configured to determine, in the image according to position information of hand key points, a position box containing a hand.
- The apparatus according to claim 26, wherein the first determination module is further configured to detect, based on the position box, position information of key points corresponding to finger joints of the hand and/or position information of key points corresponding to fingertips.
- The apparatus according to any one of claims 25 to 27, wherein the first determination module is further configured to obtain movement information of fingers of a hand based on the position information of the key points of the first local part.
- The apparatus according to any one of claims 24 to 28, wherein the first determination module is further configured to determine first-type movement information of a second local part of the limb according to the position information of the key points of the limb.
- The apparatus according to any one of claims 22 to 29, further comprising: a second determination module configured to determine second-type movement information of a connecting part based on position information of key points of two local parts of the limb connected by the connecting part.
- The apparatus according to claim 30, further comprising: a connecting part control module configured to determine second-type movement information of a connecting part of at least two local parts according to features of the at least two local parts and a first movement constraint condition of the connecting part, and to control movement of the connecting part of the controlled model according to the second-type movement information.
- The apparatus according to claim 31, wherein the connecting part control module is further configured to: determine, according to a type of the connecting part, a control mode for controlling the connecting part; and control the movement of the connecting part of the controlled model according to the control mode and the second-type movement information.
- The apparatus according to claim 31, wherein the connecting part control module is further configured to, in a case where the connecting part is a first-type connecting part, determine that the control mode is a first-type control mode, wherein the first-type control mode is used for directly controlling movement of a connecting part of the controlled model corresponding to the first-type connecting part.
- The apparatus according to claim 31, wherein the connecting part control module is further configured to, in a case where the connecting part is a second-type connecting part, determine that the control mode is a second-type control mode, wherein the second-type control mode is used for indirectly controlling movement of a connecting part of the controlled model corresponding to the second-type connecting part, and the indirect control is implemented by controlling a local part of the controlled model corresponding to a local part other than the second-type connecting part.
- The apparatus according to claim 34, wherein the connecting part control module is further configured to: in a case where the control mode is the second-type control mode, decompose the second-type movement information to obtain first-type rotation information of the connecting part being pulled to rotate by a traction part; adjust movement information of the traction part according to the first-type rotation information; and control movement of the traction part of the controlled model by using the adjusted movement information of the traction part, so as to indirectly control the movement of the connecting part.
- The apparatus according to claim 35, wherein the connecting part control module is further configured to: decompose the second-type movement information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part; and control, by using the second-type rotation information, rotation of the connecting part of the controlled model relative to the traction part.
- The apparatus according to any one of claims 34 to 36, wherein the first-type connecting part includes an elbow and a knee, and the second-type connecting part includes a wrist and an ankle.
- The apparatus according to any one of claims 34 to 37, wherein if the second-type connecting part is a wrist, the traction part corresponding to the wrist includes an upper arm and/or a forearm; and if the second-type connecting part is an ankle, the traction part corresponding to the ankle includes a lower leg and/or a thigh.
- The apparatus according to any one of claims 22 to 38, wherein the second acquisition module is specifically configured to acquire first 2D coordinates of the limb based on a 2D image, and the first determination module is configured to obtain first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
- The apparatus according to any one of claims 22 to 39, wherein the second acquisition module is specifically configured to acquire second 3D coordinates of skeleton key points of the limb based on a 3D image, and the first determination module is specifically configured to obtain third 3D coordinates based on the second 3D coordinates.
- The apparatus according to claim 40, wherein the first determination module is specifically configured to adjust, based on the second 3D coordinates, 3D coordinates of skeleton key points corresponding to an occluded part of the limb in the 3D image, thereby obtaining the third 3D coordinates.
- The apparatus according to any one of claims 22 to 41, wherein the first-type movement information includes a quaternion.
- An image device, comprising: a memory; and a processor connected to the memory and configured to implement the method provided in any one of claims 1 to 21 by executing computer-executable instructions stored in the memory.
- A non-volatile computer storage medium storing computer-executable instructions, wherein after the computer-executable instructions are executed by a processor, the method provided in any one of claims 1 to 21 can be implemented.
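Claims 14, 15 and 21 describe decomposing a connecting part's second-type movement information, represented as a quaternion, into rotation imparted by a traction part (e.g. the forearm pulling the wrist) and rotation of the connecting part relative to that traction part. The patent does not disclose a specific decomposition formula; the sketch below illustrates one standard way to perform such a split, the swing-twist decomposition. The function names, the (w, x, y, z) quaternion convention, and the use of NumPy are the editor's illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    """Conjugate of a quaternion (equals the inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def swing_twist(q, axis):
    """Split a unit quaternion q into q = swing * twist.

    `twist` is the rotation about `axis` (e.g. the long axis of the forearm,
    analogous to rotation relative to the traction part), and `swing` is the
    remaining rotation (analogous to the part the traction part provides).
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    w, v = q[0], np.asarray(q[1:], dtype=float)
    proj = np.dot(v, axis) * axis              # vector part projected onto axis
    twist = np.concatenate(([w], proj))
    norm = np.linalg.norm(twist)
    if norm < 1e-9:                            # 180-degree swing: twist is undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist = twist / norm
    swing = quat_mul(q, quat_conj(twist))      # swing = q * twist^-1
    return swing, twist
```

Under this reading, the twist component corresponds to the second-type rotation information (rotation of the wrist relative to the forearm) and the swing component to the first-type rotation information transferred to the traction part's motion.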
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202011595QA SG11202011595QA (en) | 2019-01-18 | 2020-01-16 | Image processing method and apparatus, image device, and storage medium |
KR1020207036649A KR20210011425A (ko) | 2019-01-18 | 2020-01-16 | 이미지 처리 방법 및 디바이스, 이미지 장치, 및 저장 매체 |
JP2020565269A JP7061694B2 (ja) | 2019-01-18 | 2020-01-16 | 画像処理方法および装置、画像機器、ならびに記憶媒体 |
US17/102,305 US11741629B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of model derived from captured image |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049830.6 | 2019-01-18 | ||
CN201910049830 | 2019-01-18 | ||
CN201910365188.2 | 2019-04-30 | ||
CN201910365188.2A CN111460875B (zh) | 2019-01-18 | 2019-04-30 | 图像处理方法及装置、图像设备及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/102,305 Continuation US11741629B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of model derived from captured image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020147791A1 true WO2020147791A1 (zh) | 2020-07-23 |
Family
ID=71614424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/072520 WO2020147791A1 (zh) | 2019-01-18 | 2020-01-16 | 图像处理方法及装置、图像设备及存储介质 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020147791A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113058260A * | 2021-04-22 | 2021-07-02 | 杭州当贝网络科技有限公司 | Somatosensory action recognition method, system and storage medium based on player profile |
- CN113065482A * | 2021-04-09 | 2021-07-02 | 上海云从企业发展有限公司 | Behavior detection method, system, computer device and medium based on image recognition |
- CN113076903A * | 2021-04-14 | 2021-07-06 | 上海云从企业发展有限公司 | Target behavior detection method, system, computer device and machine-readable medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN102824176A * | 2012-09-24 | 2012-12-19 | 南通大学 | Upper limb joint range-of-motion measurement method based on a Kinect sensor |
- US20130195330A1 * | 2012-01-31 | 2013-08-01 | Electronics And Telecommunications Research Institute | Apparatus and method for estimating joint structure of human body |
- CN106251396A * | 2016-07-29 | 2016-12-21 | 迈吉客科技(北京)有限公司 | Real-time control method and system for a three-dimensional model |
- CN108227931A * | 2018-01-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Method, device, system, program and storage medium for controlling a virtual character |
- CN108305321A * | 2018-02-11 | 2018-07-20 | 谢符宝 | Real-time reconstruction method and apparatus for a stereoscopic human-hand 3D skeleton model based on a binocular color imaging system |
- CN109035415A * | 2018-07-03 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Virtual model processing method, apparatus, device and computer-readable storage medium |
- 2020-01-16 WO PCT/CN2020/072520 patent/WO2020147791A1 active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- US20130195330A1 * | 2012-01-31 | 2013-08-01 | Electronics And Telecommunications Research Institute | Apparatus and method for estimating joint structure of human body |
- CN102824176A * | 2012-09-24 | 2012-12-19 | 南通大学 | Upper limb joint range-of-motion measurement method based on a Kinect sensor |
- CN106251396A * | 2016-07-29 | 2016-12-21 | 迈吉客科技(北京)有限公司 | Real-time control method and system for a three-dimensional model |
- CN108227931A * | 2018-01-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Method, device, system, program and storage medium for controlling a virtual character |
- CN108305321A * | 2018-02-11 | 2018-07-20 | 谢符宝 | Real-time reconstruction method and apparatus for a stereoscopic human-hand 3D skeleton model based on a binocular color imaging system |
- CN109035415A * | 2018-07-03 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Virtual model processing method, apparatus, device and computer-readable storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113065482A * | 2021-04-09 | 2021-07-02 | 上海云从企业发展有限公司 | Behavior detection method, system, computer device and medium based on image recognition |
- CN113076903A * | 2021-04-14 | 2021-07-06 | 上海云从企业发展有限公司 | Target behavior detection method, system, computer device and machine-readable medium |
- CN113058260A * | 2021-04-22 | 2021-07-02 | 杭州当贝网络科技有限公司 | Somatosensory action recognition method, system and storage medium based on player profile |
- CN113058260B * | 2021-04-22 | 2024-02-02 | 杭州当贝网络科技有限公司 | Somatosensory action recognition method, system and storage medium based on player profile |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN111460875B (zh) | Image processing method and apparatus, image device and storage medium | |
- CN109636831B (zh) | Method for estimating three-dimensional human body pose and hand information | |
- CN105389539B (zh) | Three-dimensional gesture pose estimation method and system based on depth data | |
- WO2020147791A1 (zh) | Image processing method and apparatus, image device and storage medium | |
- WO2020147796A1 (zh) | Image processing method and apparatus, image device and storage medium | |
- JP2022503776A (ja) | System and method for generating complementary data for a visual display | |
- WO2023071964A1 (zh) | Data processing method and apparatus, electronic device, and computer-readable storage medium | |
- JP2019096113A (ja) | Processing apparatus, method and program for key point data | |
- CN106815855A (zh) | Human motion tracking method based on a combination of generative and discriminative models | |
- WO2020147797A1 (zh) | Image processing method and apparatus, image device and storage medium | |
- Klein et al. | A markeless augmented reality tracking for enhancing the user interaction during virtual rehabilitation | |
- Baek et al. | Dance experience system using multiple kinects | |
- TWI736083B (zh) | Method and system for motion prediction | |
- Jatesiktat et al. | Personalized markerless upper-body tracking with a depth camera and wrist-worn inertial measurement units | |
- WO2020147794A1 (zh) | Image processing method and apparatus, image device and storage medium | |
- Liu et al. | Skeleton tracking based on Kinect camera and the application in virtual reality system | |
- CN115530814A (zh) | Child exercise rehabilitation training method based on visual pose detection and computer deep learning | |
- Pavitra et al. | Deep learning-based yoga learning application | |
- CN112837339A (zh) | Trajectory drawing method and apparatus based on motion capture technology | |
- Su et al. | Estimating human pose with both physical and physiological constraints | |
- JP7482471B2 (ja) | Method for generating a learning model | |
- Asokan et al. | IoT based Pose detection of patients in Rehabilitation Centre by PoseNet Estimation Control | |
- JP2021099666A (ja) | Method for generating a learning model | |
- CN113842622A (zh) | Sports teaching method, apparatus, system, electronic device and storage medium | |
- CN113673494A (zh) | Human body posture standard motion behavior matching method and system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20741003 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020565269 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20207036649 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20741003 Country of ref document: EP Kind code of ref document: A1 |