CN111062276A - Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment - Google Patents

Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment

Info

Publication number
CN111062276A
Authority
CN
China
Prior art keywords
posture
human
target
human body
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911218565.6A
Other languages
Chinese (zh)
Inventor
姚志强
周曦
吴媛
吴大为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jize Technology Co Ltd
Yuncong Technology Group Co Ltd
Original Assignee
Guangzhou Jize Technology Co Ltd
Yuncong Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jize Technology Co Ltd, Yuncong Technology Group Co Ltd
Priority to CN201911218565.6A
Publication of CN111062276A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a human body posture recommendation method based on human-computer interaction, which comprises the following steps: identifying a target portrait in a shooting field of view; extracting human body key points based on the target portrait; and acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library. The invention can automatically detect the human body posture, intelligently recommend a plurality of photographing postures according to the initial posture of the portrait, and display a corresponding personalized human body posture guide frame on the display device, so that the user can adjust his or her body on the basis of the initial posture, guided by the visual indication, to reach a suitable posture.

Description

Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment
Technical Field
The invention belongs to the field of image processing, and particularly relates to a human body posture recommendation method and device based on human-computer interaction, a machine readable medium and equipment.
Background
When taking a photograph or shooting a video, the person being photographed generally wants to place the body in a suitable posture, commonly known as striking a pose. These poses may be natural or playful, and are a way of expressing emotion. However, because there is no reference during posture adjustment, a subject who lacks photographing experience may present only a stiff, unnatural and awkward posture at the start of shooting, and may need a great deal of time and repeated adjustments.
In a conventional approach, in a shooting scene with a standard posture, such as shooting an identification photograph, a human body posture guide frame is usually displayed on a display device to indicate the shooting position and posture that the person being photographed should assume. With the guide frame, the difference between the subject's current posture and the target posture can be shown visually, so that the subject can actively adjust his or her posture to complete shooting in the standard posture.
However, such a human body posture guide frame has poor versatility: it supports only a fixed standard posture and provides no human body posture guidance for video shooting.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a human body posture recommendation method, device, machine-readable medium and apparatus based on human-computer interaction, which are used to solve the problems in the prior art.
In order to achieve the above objects and other related objects, the present invention provides a human body posture recommendation method based on human-computer interaction, including:
identifying a target portrait in a shooting field of view;
extracting key points of the human body based on the target portrait;
and acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library.
Optionally, the recommendation method further includes:
displaying the one or more target posture templates.
Optionally, the posture template comprises an example image.
Optionally, the posture template further comprises a human body posture guide frame, and the example image is located in the human body posture guide frame.
Optionally, the human body key points include: the top of the head, the facial features, the neck and the four limbs.
Optionally, acquiring one or more target posture templates matching the posture of the target portrait from a posture library based on the human body key points includes:
aligning the human body key points corresponding to the posture template with the extracted human body key points;
calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
and selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
To achieve the above and other related objects, the present invention provides a human body posture recommendation device based on human-computer interaction, comprising:
the image acquisition module is used for identifying a target portrait in a shooting field of view;
the key point extraction module is used for extracting key points of a human body based on the target portrait;
and the matching module is used for acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library.
Optionally, the recommendation device further includes:
a display module for displaying the one or more target posture templates.
Optionally, the posture template comprises an example image.
Optionally, the posture template further comprises a human body posture guide frame, and the example image is located in the human body posture guide frame.
Optionally, the human body key points include: the top of the head, the facial features, the neck and the four limbs.
Optionally, acquiring one or more target posture templates matching the posture of the target portrait from a posture library based on the human body key points includes:
aligning the human body key points corresponding to the posture template with the extracted human body key points;
calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
and selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
To achieve the foregoing and other related objectives, the present invention provides one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform one or more of the methods described above.
To achieve the above and other related objects, the present invention provides an apparatus comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform one or more of the methods described previously.
As described above, the human body posture recommendation method, device, machine readable medium and apparatus based on human-computer interaction provided by the invention have the following beneficial effects:
the invention can automatically detect the human body posture, intelligently recommend a plurality of photographing postures according to the initial posture of the portrait, and display a corresponding personalized human body posture guide frame on the display equipment, so that the user can adjust the body through visual indication on the basis of the initial posture to obtain a proper posture.
Drawings
FIG. 1 is a flowchart of a human body posture recommendation method based on human-computer interaction according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a method for defining key points of a human body according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a human body posture guidance frame according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a human body posture recommendation device based on human-computer interaction according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present invention. The drawings show only the components related to the present invention rather than the number, shape and size of the components in an actual implementation; in practice, the type, quantity and proportion of the components may vary, and the component layout may be more complex.
As shown in fig. 1, a human body posture recommendation method based on human-computer interaction includes:
S11, identifying the target portrait in the shooting field of view;
A target portrait appearing in the shooting field of view is identified by a human body detection algorithm, and the circumscribed rectangular frame of the portrait is calculated. The position of the target portrait can be confirmed through the circumscribed rectangular frame. The human body detection algorithm includes, but is not limited to, HOG + AdaBoost and Faster R-CNN.
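By way of non-limiting illustration, the short sketch below shows one possible implementation of this detection step in Python. It uses OpenCV's built-in HOG + linear-SVM pedestrian detector (one of the detector families mentioned above) rather than the specific detector of the invention, and returns the circumscribed rectangle of the most confident detection; the function name and parameter values are assumptions made only for this example.

```python
import cv2

def detect_target_portrait(image_bgr):
    """Detect the most confident person in the frame and return its
    circumscribed rectangle (x, y, w, h), or None if no person is found.
    Illustrative only: uses OpenCV's default HOG + SVM people detector."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, weights = hog.detectMultiScale(image_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    if len(rects) == 0:
        return None
    # Keep the detection with the highest confidence as the target portrait.
    best_rect, _ = max(zip(rects, weights), key=lambda rw: float(rw[1]))
    x, y, w, h = best_rect
    return int(x), int(y), int(w), int(h)
```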
S12, extracting key points of the human body based on the target portrait;
Human body key points are extracted from the target portrait by a human body key point detection algorithm. The human body key points may include the positions of the main joints, such as the top of the head, the facial features, the neck and the four limbs. The human body key point detection algorithm includes, but is not limited to, G-RMI and CFN.
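As a further hedged illustration, the sketch below extracts two-dimensional key points with MediaPipe Pose as a stand-in for the G-RMI or CFN detectors named above; the optional crop to the detected rectangle and the pixel-coordinate output format are choices made only for this example.

```python
import cv2
import mediapipe as mp

def extract_keypoints(image_bgr, box=None):
    """Return a list of (x, y) pixel coordinates of the detected body key
    points, relative to the (optionally cropped) image. Illustrative only:
    uses MediaPipe Pose instead of the G-RMI / CFN detectors in the text."""
    if box is not None:                      # optionally crop to the portrait
        x, y, w, h = box
        image_bgr = image_bgr[y:y + h, x:x + w]
    img_h, img_w = image_bgr.shape[:2]
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return []
    return [(lm.x * img_w, lm.y * img_h)     # normalized -> pixel coordinates
            for lm in result.pose_landmarks.landmark]
```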
S13, acquiring one or more target posture templates matching the posture of the target portrait from a posture library based on the human body key points.
The posture library stores a plurality of posture templates, which may be template photos built into the terminal device or photos in a photo library taken by the terminal device. For example, a user may collect a series of template photos in different styles (for example, cute, funny or playful poses), and each style may have many different templates. The terminal device selects one or more photos from the posture library as posture templates and acquires the target posture template selected by the user.
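For concreteness, one possible layout of a posture library entry is sketched below; the field names are assumptions for this example and are not prescribed by the description, which only requires that each template have an example image, an optional guide frame and corresponding key points.

```python
# A minimal, illustrative posture library: a list of template records.
pose_library = [
    {
        "example_image": "templates/cute_01.jpg",      # example image shown to the user
        "guide_frame": "templates/cute_01_frame.png",  # personalized posture guide frame
        "keypoints": [(120.0, 40.0), (118.0, 80.0)],   # template body key points (x, y)
    },
    # ... further templates of other styles, each with its own key points
]
```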
In an embodiment, the recommendation method further comprises displaying the one or more target posture templates. The user can then adjust his or her photographing posture according to the displayed target posture templates.
In one embodiment, the posture template includes a human body posture guide frame, and an example image is arranged in the human body posture guide frame, as shown in fig. 3. The human body posture guide frame indicates the photographing position and photographing posture that the person being photographed should assume, and visually shows the subject the difference between his or her current posture and the guide frame, so that the subject can adjust the posture in a targeted manner and complete shooting in the standard posture.
In one embodiment, acquiring one or more target posture templates matching the posture of the target portrait from the posture library based on the human body key points comprises:
S131, aligning the human body key points corresponding to the posture template with the extracted human body key points;
Specifically, a Procrustes algorithm is adopted to align the human body key points corresponding to the posture template with the extracted human body key points;
S132, calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
S133, selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
In the process of selecting the posture templates, the posture deviations may also be sorted from small to large, and the templates corresponding to the N smallest deviations may be selected as the target posture templates, where N can be set according to the actual situation; a minimal code sketch of this matching procedure is given below.
It can be understood that the larger the posture deviation, the larger the difference between the posture of the portrait in the posture template and the posture of the user; the smaller the posture deviation, the smaller that difference.
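By way of non-limiting illustration, the sketch below follows steps S131 to S133: it aligns the template key points onto the extracted key points with an ordinary (similarity) Procrustes transform, sums the per-pair Euclidean distances as the posture deviation, and keeps the templates whose deviation falls below a preset threshold, optionally only the N best. The library entries are assumed to be mappings with a "keypoints" field, as in the library sketch above, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def procrustes_align(template_pts, target_pts):
    """Map the template key points onto the target key points with the best
    similarity transform (scale, rotation, translation): step S131."""
    A = np.asarray(template_pts, dtype=float)   # template key points, shape (K, 2)
    B = np.asarray(target_pts, dtype=float)     # extracted key points, shape (K, 2)
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - mu_a, B - mu_b
    U, S, Vt = np.linalg.svd(A0.T @ B0)         # optimal rotation via SVD
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.array([1.0, d])
    R = (U * D) @ Vt                            # equals U @ diag(D) @ Vt
    scale = (S * D).sum() / (A0 ** 2).sum()
    return scale * (A0 @ R) + mu_b

def posture_deviation(template_pts, target_pts):
    """Sum of Euclidean distances over the aligned key point pairs: step S132."""
    aligned = procrustes_align(template_pts, target_pts)
    diffs = aligned - np.asarray(target_pts, dtype=float)
    return float(np.linalg.norm(diffs, axis=1).sum())

def match_templates(pose_library, target_pts, max_deviation=100.0, top_n=None):
    """Select templates with deviation below a preset threshold, sorted from
    the closest match, optionally keeping only the N best: step S133."""
    scored = [(posture_deviation(t["keypoints"], target_pts), t)
              for t in pose_library]
    scored = [s for s in scored if s[0] < max_deviation]
    scored.sort(key=lambda s: s[0])
    return [t for _, t in (scored[:top_n] if top_n else scored)]
```

The returned templates, together with their example images and guide frames, can then be displayed so that the user adjusts the initial posture toward a recommended one.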
The invention can automatically detect the human body posture, intelligently recommend a plurality of photographing postures according to the initial posture of the portrait, and display a corresponding personalized human body posture guide frame on the display device, so that the user can adjust his or her body on the basis of the initial posture, guided by the visual indication, to reach a suitable posture.
As shown in fig. 4, a human body posture recommendation device based on human-computer interaction includes an image acquisition module 11, a key point extraction module 12 and a matching module 13.
The image acquisition module is used for identifying a target portrait in the shooting field of view.
A target portrait appearing in the shooting field of view is identified by a human body detection algorithm, and the circumscribed rectangular frame of the portrait is calculated. The position of the target portrait can be confirmed through the circumscribed rectangular frame. The human body detection algorithm includes, but is not limited to, HOG + AdaBoost and Faster R-CNN.
The key point extraction module is used for extracting key points of the human body based on the target portrait.
Human body key points are extracted from the target portrait by a human body key point detection algorithm. The human body key points may include the positions of the main joints, such as the top of the head, the facial features, the neck and the four limbs. The human body key point detection algorithm includes, but is not limited to, G-RMI and CFN.
The matching module is used for acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library.
The posture library stores a plurality of posture templates, which may be template photos built into the terminal device or photos in a photo library taken by the terminal device. For example, a user may collect a series of template photos in different styles (for example, cute, funny or playful poses), and each style may have many different templates. The terminal device selects one or more photos from the posture library as posture templates and acquires the target posture template selected by the user.
In one embodiment, the recommendation apparatus further comprises:
a display module for displaying the one or more target posture templates. The user can then adjust his or her photographing posture according to the displayed target posture templates.
In an embodiment, the posture template includes an example image. The posture template further comprises a human body posture guide frame, and the example image is located in the human body posture guide frame.
As shown in fig. 3, the human body posture guide frame indicates the photographing position and photographing posture that the person being photographed should assume, and visually shows the subject the difference between his or her current posture and the guide frame, so that the subject can adjust the posture in a targeted manner and complete shooting in the standard posture.
In an embodiment, acquiring one or more target posture templates matching the posture of the target portrait from the posture library based on the human body key points includes:
aligning the human body key points corresponding to the posture template with the extracted human body key points;
specifically, a Procrustes algorithm is adopted to align the human body key points corresponding to the posture template with the extracted human body key points;
calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
and selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
In the process of selecting the posture templates, the posture deviations may also be sorted from small to large, and the templates corresponding to the N smallest deviations may be selected as the target posture templates, where N can be set according to the actual situation.
It can be understood that the larger the posture deviation, the larger the difference between the posture of the portrait in the posture template and the posture of the user; the smaller the posture deviation, the smaller that difference.
An embodiment of the present application further provides an apparatus, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method of fig. 1. In practical applications, the device may be used as a terminal device or as a server; examples of the terminal device may include: a smart phone, a tablet computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, a vehicle-mounted computer, a desktop computer, a set-top box, a smart television, a wearable device, and the like.
The present embodiment also provides a non-volatile readable storage medium, where one or more modules (programs) are stored in the storage medium; when the one or more modules are applied to a device, the device may be caused to execute the instructions of the steps of the human body posture recommendation method in fig. 1 of the present embodiment.
Fig. 5 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 is used to implement communication connections between the elements. The first memory 1103 may include a high-speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, and the first memory 1103 may store various programs for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the first processor 1101 may be, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the first processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Optionally, the input device 1100 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; the output devices 1102 may include output devices such as a display, audio, and the like.
In this embodiment, the processor of the terminal device includes modules for executing the functions of the modules of the human body posture recommendation device described above; for specific functions and technical effects, reference may be made to the foregoing embodiments, which are not repeated here.
Fig. 6 is a schematic hardware structure diagram of a terminal device according to an embodiment of the present application. FIG. 6 is a specific embodiment of the implementation of FIG. 5. As shown, the terminal device of the present embodiment may include a second processor 1201 and a second memory 1202.
The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in the above embodiments.
The second memory 1202 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The second memory 1202 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a second processor 1201 is provided in the processing assembly 1200. The terminal device may further include: communication component 1203, power component 1204, multimedia component 1205, speech component 1206, input/output interfaces 1207, and/or sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing assembly 1200 may include one or more second processors 1201 to execute instructions to perform all or part of the steps of the data processing method described above. Further, the processing component 1200 can include one or more modules that facilitate interaction between the processing component 1200 and other components. For example, the processing component 1200 can include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power to the various components of the terminal device. The power components 1204 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia components 1205 include a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a Microphone (MIC) configured to receive external voice signals when the terminal device is in an operational mode, such as a voice recognition mode. The received speech signal may further be stored in the second memory 1202 or transmitted via the communication component 1203. In some embodiments, the speech component 1206 further comprises a speaker for outputting speech signals.
The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 1208 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor component 1208 may detect an open/closed state of the terminal device, relative positioning of the components, presence or absence of user contact with the terminal device. The sensor assembly 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 1208 may also include a camera or the like.
The communication component 1203 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log onto a GPRS network and establish communication with a server via the Internet.
As can be seen from the above, the communication component 1203, the voice component 1206, the input/output interface 1207 and the sensor component 1208 referred to in the embodiment of fig. 6 can be implemented as the input device in the embodiment of fig. 5.
The foregoing embodiments are merely illustrative of the principles and effects of the present invention and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical concept disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (14)

1. A human body posture recommendation method based on human-computer interaction is characterized by comprising the following steps:
identifying a target portrait in a shooting field of view;
extracting key points of the human body based on the target portrait;
and acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library.
2. The human body posture recommendation method based on human-computer interaction according to claim 1, characterized in that the recommendation method further comprises:
displaying the one or more target posture templates.
3. The human-computer interaction-based human body posture recommendation method of claim 1, wherein the target posture template comprises an example image.
4. The human-computer interaction-based human body posture recommendation method of claim 3, wherein the target posture template further comprises a human body posture guide frame, and the example image is located in the human body posture guide frame.
5. The human body posture recommendation method based on human-computer interaction of claim 1, wherein the human body key points comprise: the top of the head, the facial features, the neck and the four limbs.
6. The human body posture recommendation method based on human-computer interaction according to claim 1, wherein acquiring one or more target posture templates matching the posture of the target portrait from a posture library based on the human body key points comprises:
aligning the human body key points corresponding to the posture template with the extracted human body key points;
calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
and selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
7. A human posture recommendation device based on human-computer interaction is characterized by comprising:
the image acquisition module is used for identifying a target portrait in a shooting field of view;
the key point extraction module is used for extracting key points of a human body based on the target portrait;
and the matching module is used for acquiring, based on the human body key points, one or more target posture templates matching the posture of the target portrait from a posture library.
8. The human-computer interaction-based human posture recommendation device of claim 7, further comprising:
a display module for displaying the one or more target posture templates.
9. The human-computer interaction-based human body posture recommendation device of claim 7, wherein the posture template comprises an example image.
10. The human-computer interaction-based human body posture recommendation device of claim 9, wherein the posture template further comprises a human body posture guide frame, and the example image is located in the human body posture guide frame.
11. The human-computer interaction-based human body posture recommendation device of claim 7, wherein the human body key points comprise: the top of the head, the facial features, the neck and the four limbs.
12. The human-computer interaction-based human body posture recommendation device of claim 7, wherein acquiring one or more target posture templates matching the posture of the target portrait from a posture library based on the human body key points comprises:
aligning the human body key points corresponding to the posture template with the extracted human body key points;
calculating the Euclidean distance of each pair of aligned key points, wherein the sum of all the Euclidean distances is the posture deviation between the posture template and the target portrait;
and selecting one or more posture templates whose posture deviation is smaller than a preset deviation value as the one or more target posture templates matching the posture of the target portrait.
13. An apparatus, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method recited in one or more of claims 1-6.
14. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method of one or more of claims 1-6.
CN201911218565.6A 2019-12-03 2019-12-03 Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment Pending CN111062276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911218565.6A CN111062276A (en) 2019-12-03 2019-12-03 Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911218565.6A CN111062276A (en) 2019-12-03 2019-12-03 Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment

Publications (1)

Publication Number Publication Date
CN111062276A true CN111062276A (en) 2020-04-24

Family

ID=70299555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911218565.6A Pending CN111062276A (en) 2019-12-03 2019-12-03 Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment

Country Status (1)

Country Link
CN (1) CN111062276A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025834A1 (en) * 2009-07-31 2011-02-03 Samsung Electronics Co., Ltd. Method and apparatus of identifying human body posture
CN104869299A (en) * 2014-02-26 2015-08-26 联想(北京)有限公司 Prompting method and device
CN106791364A (en) * 2016-11-22 2017-05-31 维沃移动通信有限公司 Method and mobile terminal that a kind of many people take pictures
CN108846377A (en) * 2018-06-29 2018-11-20 百度在线网络技术(北京)有限公司 Method and apparatus for shooting image
CN109005336A (en) * 2018-07-04 2018-12-14 维沃移动通信有限公司 A kind of image capturing method and terminal device
CN108905136A (en) * 2018-07-25 2018-11-30 山东体育学院 A kind of taijiquan learning intelligence movement diagnostic feedback system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914693A (en) * 2020-07-16 2020-11-10 上海云从企业发展有限公司 Face posture adjusting method, system, device, equipment and medium
CN112069358A (en) * 2020-08-18 2020-12-11 北京达佳互联信息技术有限公司 Information recommendation method and device and electronic equipment
CN112613490A (en) * 2021-01-08 2021-04-06 云从科技集团股份有限公司 Behavior recognition method and device, machine readable medium and equipment
WO2022188056A1 (en) * 2021-03-10 2022-09-15 深圳市大疆创新科技有限公司 Method and device for image processing, and storage medium
CN113194254A (en) * 2021-04-28 2021-07-30 上海商汤智能科技有限公司 Image shooting method and device, electronic equipment and storage medium
CN112990137A (en) * 2021-04-29 2021-06-18 长沙鹏阳信息技术有限公司 Classroom student sitting posture analysis method based on template matching
CN112990137B (en) * 2021-04-29 2021-09-21 长沙鹏阳信息技术有限公司 Classroom student sitting posture analysis method based on template matching
WO2023029991A1 (en) * 2021-09-03 2023-03-09 北京字跳网络技术有限公司 Photographing method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN111062276A (en) Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment
EP4199529A1 (en) Electronic device for providing shooting mode based on virtual character and operation method thereof
WO2020010979A1 (en) Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
CN109308205B (en) Display adaptation method, device, equipment and storage medium of application program
CN111726536A (en) Video generation method and device, storage medium and computer equipment
US20110157009A1 (en) Display device and control method thereof
CN110708596A (en) Method and device for generating video, electronic equipment and readable storage medium
CN111541907B (en) Article display method, apparatus, device and storage medium
CN106303029A (en) The method of controlling rotation of a kind of picture, device and mobile terminal
CN104484858B (en) Character image processing method and processing device
CN112052897B (en) Multimedia data shooting method, device, terminal, server and storage medium
US9137461B2 (en) Real-time camera view through drawn region for image capture
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN111914693A (en) Face posture adjusting method, system, device, equipment and medium
CN107958223A (en) Face identification method and device, mobile equipment, computer-readable recording medium
CN108021905A (en) image processing method, device, terminal device and storage medium
CN112667835A (en) Work processing method and device, electronic equipment and storage medium
US20230224574A1 (en) Photographing method and apparatus
CN112148404A (en) Head portrait generation method, apparatus, device and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN110827195A (en) Virtual article adding method and device, electronic equipment and storage medium
CN112788244B (en) Shooting method, shooting device and electronic equipment
CN113744384B (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN105426904A (en) Photo processing method, apparatus and device
CN112257594A (en) Multimedia data display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description

PB01 Publication

SE01 Entry into force of request for substantive examination

CB02 Change of applicant information
Address after: 511458 room 1009, No.26, Jinlong Road, Nansha District, Guangzhou City, Guangdong Province (only for office use)
Applicant after: Guangzhou Yuncong Dingwang Technology Co., Ltd
Applicant after: Yuncong Technology Group Co.,Ltd.
Address before: 511458 room 1009, No.26, Jinlong Road, Nansha District, Guangzhou City, Guangdong Province (only for office use)
Applicant before: Guangzhou Jize Technology Co.,Ltd.
Applicant before: Yuncong Technology Group Co.,Ltd.

RJ01 Rejection of invention patent application after publication
Application publication date: 20200424