CN112990055A - Posture correction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112990055A
CN112990055A (application CN202110335331.0A)
Authority
CN
China
Prior art keywords: posture, target, image, target object, current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110335331.0A
Other languages
Chinese (zh)
Inventor
孔祥晖
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110335331.0A
Publication of CN112990055A
Legal status: Pending


Classifications

    • G06V40/172 Human faces, e.g. facial parts, sketches or expressions — Classification, e.g. identification
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G11B19/20 Driving; Starting; Stopping; Control thereof
    • G11B31/00 Arrangements for the associated working of recording or reproducing apparatus with related apparatus
    • G06V40/178 Human faces — estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a posture correction method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an image of objects viewing a target device; acquiring, according to the image, a current posture of a target object among the objects; and in a case where the current posture belongs to a target posture, adjusting a playing state of the target device to prompt the target object to correct the posture. Embodiments of the disclosure can automatically remind the target object to correct its posture, conveniently and timely assisting the target object in viewing in a correct posture.

Description

Posture correction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a method and an apparatus for posture correction, an electronic device, and a storage medium.
Background
The home audio-video system is also called a home theater. It is a home audio-visual system composed of audio-visual sources (VCD, DVD, etc.), a TV set, an audio-video (AV) amplifier, and sound equipment. With the rapid development of science and technology, more and more families have begun to use home theaters to meet daily entertainment needs.
Although home theaters can enrich users' entertainment life, they can also harm users' health; for example, an incorrect viewing posture during home viewing may damage the user's eyesight. How to help users keep a correct posture while viewing has become an urgent problem to solve.
Disclosure of Invention
The present disclosure presents a posture correction scheme.
According to an aspect of the present disclosure, there is provided a posture correction method including:
acquiring an image of objects viewing a target device; acquiring, according to the image, a current posture of a target object among the objects; and in a case where the current posture belongs to a target posture, adjusting a playing state of the target device to prompt the target object to correct the posture.
In one possible implementation, before the adjusting of the playing state of the target device, the method further includes: generating a simulated image based on the image, wherein the posture of the object in the simulated image matches a standard posture; and the adjusting of the playing state of the target device to prompt the target object to correct the posture includes: controlling the target device to display the simulated image so as to prompt the target object to correct the posture according to the simulated image.
In one possible implementation, the shape of the object in the simulated image matches the target object; generating the simulated image based on the image includes: extracting body features of the target object in the image through a first neural network; generating an initial simulated image of the target object according to the body features; and adjusting the posture of the object in the initial simulated image to the standard posture to generate the simulated image.
In one possible implementation, before the acquiring, according to the image, of the current posture of the target object among the objects, the method further includes: performing attribute recognition on the objects in the image to acquire attribute information of the objects; and determining, based on the attribute information, a target object meeting a preset attribute requirement from among the objects.
In one possible implementation, the attribute information includes an age, and the target object includes an object whose age falls within a preset age range; the acquiring of the attribute information of the object includes: determining the age of the object by performing face recognition and/or human key point recognition on the object.
In one possible implementation, the acquiring, according to the image, of the current posture of the target object among the objects includes: classifying the posture of the target object in the image through a second neural network to obtain posture classification results of the target object over a plurality of postures, the plurality of postures including various standing, sitting, and lying postures; and determining the current posture of the target object according to the posture classification results.
In one possible implementation, the method further includes: performing distance recognition on the image to determine a current distance between the target object and the target device; and adjusting the playing state of the target device in a case where the current distance falls within a preset distance range.
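The distance check in the implementation above could, as one hedged sketch, be realized with a simple pinhole-camera estimate from the apparent size of a detected face. The focal length, real face width, and threshold below are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical distance estimate: distance = focal_length * real_width / pixel_width.
# focal_px, face_width_m, and min_distance_m are assumed example values.

def estimate_distance(face_pixel_width, focal_px=800.0, face_width_m=0.16):
    """Estimate viewer distance (meters) from the detected face width in pixels."""
    return focal_px * face_width_m / face_pixel_width

def too_close(face_pixel_width, min_distance_m=1.5):
    """True when the estimated distance falls inside the 'too close' range."""
    return estimate_distance(face_pixel_width) < min_distance_m
```

A larger face in the image implies a closer viewer, so the playing state would be adjusted when `too_close` returns True.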
In one possible implementation, the adjusting of the playing state of the target device includes at least one of: controlling the target device to play indication information for the target object; and adjusting at least one playing parameter of the target device at a preset time interval to reduce the playing effect of the target device, the playing parameter including one or more of saturation, hue, and lightness.
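Reducing the playing effect at a preset interval might look like the following sketch, where the `PlaybackParams` container, the per-interval factor, and the floor value are illustrative assumptions rather than part of the disclosure:

```python
from dataclasses import dataclass

# Assumed container for the adjustable playing parameters (1.0 = normal).
@dataclass
class PlaybackParams:
    saturation: float = 1.0
    lightness: float = 1.0

def degrade_step(params, factor=0.8, floor=0.2):
    """Apply one interval's reduction; clamp at a floor so the picture stays visible."""
    params.saturation = max(floor, params.saturation * factor)
    params.lightness = max(floor, params.lightness * factor)
    return params
```

Calling `degrade_step` once per preset interval gradually washes out the picture, nudging the viewer to correct their posture; restoring the pre-adjustment state simply means resetting the parameters to their saved values.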
In one possible implementation, the adjusting of the playing state of the target device in the case where the current posture belongs to the target posture includes: turning off the target device in a case where the duration for which the current posture belongs to the target posture exceeds a first preset time range.
In one possible implementation, after the adjusting of the playing state of the target device, the method further includes: restoring the playing state of the target device to the state before adjustment in a case where the duration for which the current posture belongs to a posture other than the target posture exceeds a second preset time range.
According to an aspect of the present disclosure, there is provided a posture correction apparatus including:
an image acquisition module configured to acquire an image of objects viewing a target device; a posture obtaining module configured to acquire, according to the image, a current posture of a target object among the objects; and a posture correction module configured to adjust a playing state of the target device in a case where the current posture belongs to a target posture, so as to prompt the target object to correct the posture.
In one possible implementation, the apparatus further includes a simulated image generation module configured to: generate a simulated image based on the image, wherein the posture of the object in the simulated image matches a standard posture; the posture correction module is configured to: control the target device to display the simulated image so as to prompt the target object to correct the posture according to the simulated image.
In one possible implementation, the shape of the object in the simulated image matches the target object; the simulated image generation module is further configured to: extract body features of the target object in the image through a first neural network; generate an initial simulated image of the target object according to the body features; and adjust the posture of the object in the initial simulated image to the standard posture to generate the simulated image.
In one possible implementation, the apparatus is further configured to: perform attribute recognition on the objects in the image to acquire attribute information of the objects; and determine, based on the attribute information, a target object meeting a preset attribute requirement from among the objects.
In one possible implementation, the attribute information includes an age, and the target object includes an object whose age falls within a preset age range; the apparatus is further configured to: determine the age of the object by performing face recognition and/or human key point recognition on the object.
In one possible implementation, the posture obtaining module is configured to: classify the posture of the target object in the image through a second neural network to obtain posture classification results of the target object over a plurality of postures, the plurality of postures including various standing, sitting, and lying postures; and determine the current posture of the target object according to the posture classification results.
In one possible implementation, the apparatus is further configured to: perform distance recognition on the image to determine a current distance between the target object and the target device; and adjust the playing state of the target device in a case where the current distance falls within a preset distance range.
In one possible implementation, the posture correction module is configured to perform at least one of: controlling the target device to play indication information for the target object; and adjusting at least one playing parameter of the target device at a preset time interval to reduce the playing effect of the target device, the playing parameter including one or more of saturation, hue, and lightness.
In one possible implementation, the posture correction module is further configured to: turn off the target device in a case where the duration for which the current posture belongs to the target posture exceeds a first preset time range.
In one possible implementation, the apparatus is further configured to: restore the playing state of the target device to the state before adjustment in a case where the duration for which the current posture belongs to a posture other than the target posture exceeds a second preset time range.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above-described posture correction method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described posture correction method.
In embodiments of the disclosure, an image of objects viewing a target device is acquired, a current posture of a target object among the objects is acquired according to the image, and in a case where the current posture belongs to a target posture, a playing state of the target device is adjusted to prompt the target object to correct the posture. Through this process, when the posture of the target object belongs to the target posture, the very device the target object is viewing can automatically remind the target object to correct the posture, conveniently and timely assisting the target object in viewing in a correct posture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a method of gesture correction according to an embodiment of the present disclosure.
FIG. 2 illustrates a flow diagram of a method of gesture correction according to an embodiment of the present disclosure.
FIG. 3 shows a block diagram of a gesture correction apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an application example according to the present disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a posture correction method according to an embodiment of the present disclosure, which may be applied to a posture correction apparatus; the apparatus may be a terminal device, a server, or another processing device. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the posture correction method can be applied to a cloud server or a local server; the cloud server may be a public or private cloud server, which can be flexibly selected according to the actual situation.
In some possible implementations, the gesture correction method may also be implemented by way of a processor invoking computer readable instructions stored in a memory.
As shown in fig. 1, in one possible implementation, the posture correction method may include:
In step S11, an image of objects viewing a target device is acquired.
In step S12, a current posture of a target object among the objects is acquired according to the image.
In step S13, in a case where the current posture belongs to a target posture, a playing state of the target device is adjusted to prompt the target object to correct the posture.
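As a hedged illustration only, the three steps above might be orchestrated as follows; the callables (`capture_image`, `estimate_pose`, `adjust_playback`) and the target-posture set are hypothetical placeholders, not APIs from the disclosure:

```python
# Assumed set of postures to be corrected; the disclosure leaves this flexible.
TARGET_POSTURES = {"lying", "slumped"}

def posture_correction_step(capture_image, estimate_pose, adjust_playback):
    """One pass of steps S11-S13; returns True when a correction prompt was issued."""
    image = capture_image()                 # S11: acquire image of viewers
    current_posture = estimate_pose(image)  # S12: get the target object's posture
    if current_posture in TARGET_POSTURES:  # S13: adjust playback to prompt
        adjust_playback()
        return True
    return False
```

In practice this step would run repeatedly (e.g. per frame or on a timer), with `estimate_pose` backed by the key-point or classification approaches described later.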
The target device may be any device having a playing function, such as a television, a computer, a display screen, a projection apparatus, or a mobile device such as a mobile phone or a tablet computer, and is not limited in the embodiment of the present disclosure.
The object viewing the target device may be any object, such as a child, an adult, or a pet; the specific object may be flexibly determined according to the actual situation. In some possible implementations, there may be one or more objects viewing the target device.
The manner of obtaining the image of the objects viewing the target device is not limited in the embodiments of the present disclosure. In one possible implementation, the posture correction apparatus may capture images of the objects through an image acquisition device such as a camera; in another possible implementation, the posture correction apparatus may itself include no camera or other image acquisition device, and instead directly receive images captured by an external image acquisition device. The placement of the camera is likewise not limited: in some possible implementations, the target device itself may include a camera and use it for image acquisition; in others, one or more cameras may be arranged on the target device or at one or more positions near it, as flexibly determined according to the actual situation.
Based on the acquired image, the current posture of the target object among the objects may be further acquired through step S12.
The target object may be an object having a requirement for posture correction, and which object or objects in the at least one object are selected as the target object may be flexibly determined according to actual conditions.
In one possible implementation, all objects viewing the target device may be taken as target objects. Alternatively, qualified objects may be selected from the objects based on one or more attributes: for example, selection based on age, taking children or teenagers as target objects; or screening based on the user's vision, taking objects with poor eyesight, such as those with a strong eyeglass prescription, as target objects. Alternatively, certain preset specific objects may be used as target objects. How to select the target object is not limited in the embodiments of the present disclosure, and the target object may include, but is not limited to, the above cases.
The manner of obtaining the current posture is not limited in the embodiments of the present disclosure and can be flexibly determined according to the actual situation. In one possible implementation, the current posture of the target object may be determined based on the image by detecting and recognizing key points of the target object, or by using other algorithms or posture recognition methods.
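One minimal, hypothetical key-point heuristic is sketched below: it assumes 2D body key points (named shoulder and hip landmarks in image coordinates) are already available, and classifies the posture from the tilt of the shoulder-to-hip axis. The landmark names and the 60-degree threshold are assumptions for illustration, not from the disclosure:

```python
import math

def classify_posture(keypoints):
    """keypoints: dict of name -> (x, y) in image coordinates (y grows downward).

    Returns "lying" when the torso axis is far from vertical, else "upright".
    """
    sx = (keypoints["l_shoulder"][0] + keypoints["r_shoulder"][0]) / 2
    sy = (keypoints["l_shoulder"][1] + keypoints["r_shoulder"][1]) / 2
    hx = (keypoints["l_hip"][0] + keypoints["r_hip"][0]) / 2
    hy = (keypoints["l_hip"][1] + keypoints["r_hip"][1]) / 2
    # Angle of the torso axis measured from vertical, in degrees.
    tilt = math.degrees(math.atan2(abs(hx - sx), abs(hy - sy)))
    return "lying" if tilt > 60 else "upright"
```

A real system would use a learned pose model rather than a single angle, but the sketch shows how key points map to a discrete posture label.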
In step S13, the target posture may be any posture that adversely affects the eyesight or other physical condition of the target object during viewing, and which posture is set as the target posture may flexibly change depending on the target device being viewed. For example, where the target device is a television, the target posture may be a lying posture such as lying supine, prone, or on one side; where the target device is a computer, the target posture may be a slumped-over posture, and the like.
In a case where the current posture belongs to the target posture, the playing state of the target device can be adjusted to prompt the target object to correct the posture. The playing state of the target device may include various states related to playing, such as the image display state and/or the sound playing state. The manner of adjusting the playing state can be flexibly selected according to the actual situation; for example, the display content may be changed, content showing the corrected posture may be played on the target device, a voice prompt to correct the posture may be played, or the playing effect of the target device may be reduced, and so on. Some possible implementations of step S13 are described in the following disclosed embodiments and are not expanded here.
In some possible implementations, the time during which the current posture belongs to the target posture may be counted as a statistical time, and the playing state of the target device is adjusted to prompt the target object to correct the posture in a case where the statistical time exceeds a preset statistical time, thereby reducing erroneous posture correction prompts caused by an inaccurately acquired current posture.
The specific time length of the preset statistical time may be set according to an actual situation, for example, may be set to 30 seconds, 1 minute, or 10 minutes, and is not limited in the embodiment of the present disclosure. In one example, the preset statistical time may be set to 10 minutes.
The manner of obtaining the statistical time is not limited in the embodiments of the present disclosure, and is not limited to the following embodiments.
In one possible implementation, timing may start at the moment the current posture belongs to the target posture, stop at the moment the current posture changes to a posture other than the target posture, and restart from zero when the current posture changes to the target posture again, so that the continuous time during which the current posture belongs to the target posture is counted as the statistical time.
In another possible implementation, timing may start at the moment the current posture belongs to the target posture, be paused at the moment the current posture changes to a posture other than the target posture, and resume from the paused time when the current posture changes to the target posture again, so that the cumulative time during which the current posture belongs to the target posture is counted as the statistical time. In some possible implementations, timing may also be paused when the current posture changes to another posture and, when the current posture changes back to the target posture, it is determined whether the time from the pause to the return exceeds a preset time range (for example, tens of seconds to several minutes); timing then resumes from the paused time or restarts from zero accordingly.
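The cumulative-timing variant with a reset gap could be sketched as one small timer; the class name, `reset_gap` default, and `update` signature are illustrative assumptions:

```python
# Hypothetical timer for counting how long the viewer has held the target posture.
# Timing pauses when the posture leaves the target posture and resumes on return,
# unless the gap exceeds reset_gap, in which case the accumulated time restarts.

class PostureTimer:
    def __init__(self, reset_gap=60.0):
        self.reset_gap = reset_gap
        self.accumulated = 0.0
        self.last_target_time = None  # timestamp of last target-posture observation

    def update(self, now, in_target_posture, dt):
        """Advance the timer by dt seconds; returns the current statistical time."""
        if in_target_posture:
            if (self.last_target_time is not None
                    and now - self.last_target_time > self.reset_gap):
                self.accumulated = 0.0  # gap too long: restart counting from zero
            self.accumulated += dt
            self.last_target_time = now
        return self.accumulated
```

When the returned statistical time exceeds the preset statistical time, the playing state would be adjusted as described above.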
In embodiments of the disclosure, an image of objects viewing a target device is acquired, a current posture of a target object among the objects is acquired according to the image, and in a case where the current posture belongs to a target posture, a playing state of the target device is adjusted to prompt the target object to correct the posture. Through this process, when the posture of the target object belongs to the target posture, the very device the target object is viewing can automatically remind the target object to correct the posture, conveniently and timely assisting the target object in viewing in a correct posture.
In one possible implementation, before step S13, the method proposed in the embodiments of the present disclosure may further include: generating a simulated image based on the image, wherein the posture of the object in the simulated image matches a standard posture. Step S13 may include: controlling the target device to display the simulated image so as to prompt the target object to correct the posture according to the simulated image.
The standard posture may be a posture for viewing the target device set according to the actual situation; viewing the target device in the standard posture can well protect the physical condition of the target object, such as eyesight. The specific form of the standard posture may flexibly change according to the actual situation of the target device: for example, where the target device is a television or a computer, the standard posture may include a sitting posture, and so on; which specific posture is set as the standard posture is not limited in the embodiments of the present disclosure.
That the posture of the object in the simulated image matches the standard posture may mean that the posture of the object in the simulated image is consistent with the standard posture, or that the object in the simulated image adopts a posture form similar to the standard posture.
The display form of the object in the simulated image is not limited in the embodiment of the present disclosure, and may be a virtual portrait similar to the target object, or a silhouette form matching with the shape of the target object, and the like, and may be flexibly selected according to the actual situation.
The manner of generating the simulated image can be flexibly selected according to the actual situation; for example, the simulated image can be generated by modeling according to the standard posture, or obtained by adjusting the current posture of the target object in the image based on the standard posture, and so on, as described in the following disclosed embodiments and not expanded here.
It should be noted that the order of the process of generating the simulated image and the process of acquiring the current posture of the target object in step S12 is not limited in the embodiments of the present disclosure and can be flexibly determined according to the actual situation. In one possible implementation, the simulated image may be generated and the current posture acquired at the same time; in some possible implementations, they may also be performed sequentially in a certain order, and which order is selected is not limited in the embodiments of the present disclosure.
The generated simulation image may be stored in any storage location, which is not limited in the embodiments of the present disclosure. In some possible implementations, as described in the above-disclosed embodiments, a simulated image may be displayed on the target device in the event that the current pose belongs to the target pose.
The position at which the simulated image is displayed on the target device is not limited in the embodiments of the present disclosure and may be flexibly set according to the actual situation. For example, the simulated image may be displayed at a fixed location on the target device, such as the lower right; it may also move across the target device, such as from the lower left to the upper right; the simulated image may also serve as a background of the content currently being played by the target device and be displayed with a certain transparency.
Through the above process, a simulated image matching the standard posture can be displayed on the target device, so that the target object has a clearer idea of the standard posture, and the simulated image can continuously remind the target object to view the target device in the standard posture, thereby correcting the posture of the target object.
In one possible implementation, the shape of the object in the simulated image may match the target object; generating the simulated image based on the image may include:
extracting body features of the target object in the image through a first neural network;
generating an initial simulated image of the target object according to the body features;
and adjusting the posture of the object in the initial simulated image to the standard posture to generate the simulated image.
That the shape of the object in the simulated image matches the target object may mean that the shape is completely consistent with that of the target object, or that the body proportions of the object are the same as or similar to those of the target object. In some possible implementations, the body contour and rough body shape of the object in the simulated image, such as its height and build, may match the target object. Which body characteristics of the object in the simulated image are set to match the target object can be flexibly determined according to the actual situation and is not limited in the embodiments of the present disclosure, as long as the target object can better determine, based on the object in the simulated image, how to adjust to the standard posture.
The first neural network may be any neural network capable of extracting body features of the target object from an image; its implementation form may be flexibly determined according to the actual situation and is not limited in the embodiments of the present disclosure or in those below. As described in the foregoing embodiments, the shape of the target object may include its body contour and outline, such as whether the target object is tall, short, fat, or thin, as well as the rough shape of the body. Since the extracted body features may vary with the content of the chosen shape, in some possible implementations the form of the first neural network may also vary with the required shape.
After the body features of the target object in the image are extracted through the first neural network in any form, an initial simulated image of the target object can be generated according to the body features. As described in the foregoing embodiments, the implementation form of the object in the simulated image, such as a silhouette or a virtual character, may vary, and the manner of generating the initial simulated image from the body features may vary accordingly. For example, the outline of a preset silhouette or virtual character may be adjusted according to the extracted body features to generate the initial simulated image; alternatively, the extracted body features may be used as parameters to directly generate a corresponding object, which then serves as the object in the initial simulated image.
In some possible implementations, the initial simulation image may be an image only including the object, or the initial simulation image may also include the object and some preset backgrounds, and the included background contents and forms are not limited in the embodiments of the present disclosure.
As described in the above embodiments, the posture of the object in the simulated image matches the standard posture, so in some possible implementations, the posture of the object in the initial simulated image may be adjusted to the standard posture to generate the simulated image. The manner of doing so is not limited in the embodiments of the present disclosure and is not limited to the embodiments below. In one possible implementation, the standard posture may be used as a template, and the posture of the object in the initial simulated image is matched to that template to obtain a simulated image consistent with the standard posture. In another possible implementation, the standard posture and the extracted body features may both be used as parameters to directly generate a simulated image that has the standard posture and a body matching the target object.
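The three-step pipeline above can be sketched as follows. This is a minimal illustration only: the function names and the dict-based feature/image representations are assumptions for demonstration, not the actual networks of this disclosure.

```python
def extract_body_features(image):
    # Stand-in for the first neural network: returns coarse body attributes.
    # A real network would compute these from image pixels.
    return {"height_ratio": 0.92, "build": "slim"}

def generate_initial_simulation(features):
    # The initial simulated image carries the object's shape; its posture is
    # not yet the standard posture.
    return {"shape": features, "pose": "unknown"}

def adjust_to_standard_pose(sim, standard_pose="upright_sitting"):
    # Adjust the object in the initial simulated image to the standard posture.
    adjusted = dict(sim)
    adjusted["pose"] = standard_pose
    return adjusted

simulated = adjust_to_standard_pose(
    generate_initial_simulation(extract_body_features(image=None)))
```

The resulting `simulated` object combines the target object's body shape with the standard posture, matching the two requirements stated above.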
Through the above process, a simulated image whose body matches the target object and whose posture is the standard posture can be conveniently generated using the first neural network, effectively improving the efficiency and convenience of generating the simulated image. Moreover, because the body of the object in the simulated image matches the target object, the target object can better correct its posture according to the simulated image, improving the posture correction effect.
In one possible implementation manner, before step S12, the method provided by the embodiment of the present disclosure may further include:
carrying out attribute identification on an object in the image to acquire attribute information of the object;
and determining a target object meeting the preset attribute requirement from the objects based on the attribute information.
The attribute information may be the attributes for screening objects mentioned in the above embodiments, such as age and eyesight. The method of performing attribute recognition on objects in the image may vary with the attribute information; for example, the age of an object may be obtained through face recognition or human body recognition, and the eyesight of an object may be obtained through face recognition and eye keypoint recognition.
The preset attribute requirements may vary with the acquired attribute information and are not limited to the embodiments below. For example, in some possible implementations, when the attribute information includes age, the preset attribute requirement may be an age range, such as 0 to 6 years for children, 6 to 14 years for teenagers, or the interval 0 to 14 years; in some possible implementations, when the attribute information includes eyesight, the preset attribute requirement may be an eyesight range of the object, such as less than 5.0.
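The screening step can be sketched as a simple filter. The object records and the default age range below are illustrative assumptions; attribute recognition itself (face or body recognition) is assumed to have already produced the `age` values.

```python
def filter_target_objects(objects, age_range=(0, 14)):
    # Keep only objects whose recognized age meets the preset requirement.
    low, high = age_range
    return [obj for obj in objects if low <= obj["age"] <= high]

detected = [{"id": 1, "age": 8}, {"id": 2, "age": 35}]
targets = filter_target_objects(detected)   # only the 8-year-old remains
```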
By the embodiment of the disclosure, the target object meeting the preset attribute requirement can be screened out from the objects by utilizing the attribute identification of the image, so that the playing state of the target device can be pertinently adjusted based on the current posture of the target object to prompt the target object to correct the posture.
In one possible implementation manner, the target object may include an object whose age belongs to a preset range, and the obtaining of the attribute information of the object may include:
the age of the subject is determined by performing face recognition and/or body keypoint recognition on the subject.
For the implementation form of the preset age range, reference may be made to the above embodiments, which are not repeated here. The specific way of determining the age of the object through face recognition and/or human body keypoint recognition on the image can be flexibly determined according to the actual situation and is not limited to the embodiments below.
The way of determining age through face recognition may vary. In one possible implementation, the face keypoints of each object may be determined from the image through a face keypoint recognition neural network, and the age of each object may be calculated based on the face keypoints, thereby determining the age of at least one object; in another possible implementation, the age of at least one object may be determined directly from the image based on the output of a face age recognition neural network. The specific implementation form of each neural network is not limited in the embodiments of the present disclosure; any neural network with the above functions may be used.
The manner of human body keypoint recognition may also vary. In one possible implementation, a human body skeleton keypoint recognition neural network may identify multiple skeleton keypoints of the object in the image, so that the height of the object can be determined from the distance between different skeleton keypoints (for example, the distance between a head keypoint and a foot keypoint), and the age of the object can then be determined from a mapping relationship between height and age.
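The height-then-age idea can be illustrated with a short sketch. The pixel-to-centimeter scale factor and the height-to-age bands below are made-up assumptions; a real system would calibrate the scale and learn or tabulate the mapping.

```python
import math

def estimate_height_cm(head_xy, foot_xy, pixels_per_cm):
    # Height from the head-foot keypoint distance, given a known image scale.
    return math.dist(head_xy, foot_xy) / pixels_per_cm

def age_from_height(height_cm):
    # Coarse, illustrative height -> age bands: (max height in cm, age).
    bands = [(100, 3), (120, 6), (150, 11), (170, 15)]
    for max_h, age in bands:
        if height_cm <= max_h:
            return age
    return 18
```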
In some possible implementations, the age of the object may also be determined jointly through multiple age determination methods. For example, the age of at least one object may be determined through both a face keypoint recognition neural network and a face age recognition neural network, and the two results may then be averaged (for example, by a weighted average) to obtain the final age. Alternatively, a first recognized age of at least one object may be determined through the face age recognition neural network; if the first recognized age falls within the preset age range, the height information of the object may be obtained through human body keypoint recognition, and the correspondence between height and age may be used to check whether the first recognized age is accurate, thereby determining the final age of the object.
The preset age range can be flexibly set according to the actual situation and is not limited to the embodiments below. In one example, the preset age range may be set to 14 years or less; in another example, it may be set to 6 to 14 years.
Through the embodiments of the present disclosure, the age of an object can be flexibly determined through various methods such as face recognition and/or human body keypoint recognition, improving the flexibility and accuracy of determining the target object from among the objects, and thereby improving the flexibility and pertinence of the posture correction method as a whole.
Fig. 2 shows a flowchart of a posture correction method according to an embodiment of the present disclosure, and as shown in the figure, in one possible implementation, step S12 may include:
step S121, classifying the posture of the target object in the image through a second neural network to obtain posture classification results of the target object over a plurality of postures, wherein the plurality of postures include multiple of standing, sitting, and lying postures;
and step S122, determining the current posture of the target object according to the posture classification result.
The second neural network may be any neural network having a posture classification function, and its implementation form may be flexibly determined according to actual situations, and is not limited to the following disclosed embodiments. In a possible implementation manner, the second neural network may include a feature extraction network and/or a classification network, the feature extraction network may perform feature extraction on a target object in the image to obtain feature information of the target object, and the classification network may perform pose classification on the image according to the feature information to obtain a pose classification result of the target object in a plurality of poses. The feature extraction network and the classification network in the second neural network may be a complete neural network with an independent structure, or may be one or more neural network layers with certain functions in the second neural network.
The feature extraction network may be any neural network having a feature extraction function, and the implementation form thereof is not limited in the embodiment of the present disclosure. As described in the embodiments disclosed above, in one possible implementation, the feature extraction network may be a neural network with a smaller network size, such as a neural network model smaller than some common neural networks (e.g., a ResNet50 neural network or a ResNet18 neural network).
In one possible implementation, a smaller network scale for the feature extraction network means that one or more of its parameter count, number of layers, or number of channels is smaller; the specifics can be flexibly determined according to the actual situation and are not limited in the embodiments of the present disclosure.
By reducing the number of parameters, layers, and channels in the shallow part of the neural network, the attention the feature extraction network pays to feature information irrelevant to posture classification, such as action attribute features and color texture features, can be reduced, and the speed of feature extraction can be greatly increased. Further, a feature extraction network with a small network scale also reduces inference time and training time.
In some possible implementations, the feature extraction network included in the second neural network may also be used to implement feature extraction in the first neural network; in other possible implementations, the first neural network may be completely different from the second neural network, and the choice can be made flexibly according to the actual situation.
As described in the above disclosed embodiments, the second neural network may also include a classification network. In a possible implementation manner, the posture classification of the feature information is performed through a classification network, and a posture classification result of the target object in a plurality of postures can be obtained.
The classification network may be any neural network with a posture classification function, and its implementation may be flexibly determined according to the actual situation and is not limited to the embodiments below. In one possible implementation, the classification network may be a single classification layer with a posture classification function; in another, it may be composed of multiple neural network layers, where which layers are included and what operations they perform can be flexibly determined according to the actual situation. Other implementations of the classification network are described in the disclosure below and are not expanded upon here. The classification network classifies the posture of the target object according to the feature information, and the type and form of the resulting posture classification results can be flexibly determined according to the actual situation.
Through the process, the second neural network can be directly utilized to realize the posture classification of the target object, the end-to-end posture classification is realized, and the speed and the efficiency for determining the posture of the target object are improved.
In one possible implementation, the classification network may include a transformation sub-network and/or a classification sub-network, and in one possible implementation, the process of classifying the pose of the target object according to the feature information extracted from the image may include:
performing dimension transformation on the feature information through the transformation sub-network to obtain dimension-transformed feature information;
and performing posture classification on the dimension-transformed feature information through the classification sub-network to obtain posture classification results of the target object over a plurality of postures.
The transformation sub-network may be a neural network with a dimension transformation function, and its implementation may be flexibly set according to the actual situation, which is not limited in the embodiments of the present disclosure. As described in the above embodiments, the feature information can be dimension-transformed through the transformation sub-network; exactly how depends on the implementation of the transformation sub-network. In one possible implementation, the transformation sub-network may have a dimension reduction function, in which case it reduces the dimensionality of the input feature information to obtain the dimension-transformed feature information.
The classification sub-network may be a neural network with a posture classification function, and its implementation form can be flexibly determined according to the actual situation and is not limited to the embodiments below. In one possible implementation, the classification sub-network may include only a single classification layer; in another, it may be a neural network with a posture classification function composed in any form of convolution layers, pooling layers, classification layers, or other layers. As can be seen from the foregoing disclosure, in one possible implementation, the dimension-transformed feature information may be classified by the classification sub-network to obtain the classification result of the target object.
In one possible implementation, the classification network proposed in the embodiments of the present disclosure may include only a classification sub-network; in another, it may be formed of a transformation sub-network and a classification sub-network. These implementations improve the flexibility of the posture classification process; moreover, performing posture classification after the feature information has been dimension-transformed by the transformation sub-network reduces the amount of data the classification sub-network must process and increases the speed of posture classification.
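The transformation-then-classification structure can be sketched as a toy classification head. The dimensions, random weights, and posture count below are illustrative assumptions; a real network would learn its weights rather than draw them at random.

```python
import math
import random

def linear(x, weights):
    # A linear layer: weights has one row per output dimension.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def softmax(z):
    # Numerically stable softmax over the classification logits.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
features = [random.random() for _ in range(8)]                        # extracted features
w_reduce = [[random.random() for _ in range(8)] for _ in range(4)]    # 8 -> 4 dims
w_class = [[random.random() for _ in range(4)] for _ in range(3)]     # 4 -> 3 postures

reduced = linear(features, w_reduce)        # "transformation sub-network"
probs = softmax(linear(reduced, w_class))   # "classification sub-network"
```

The `probs` list is one form the posture classification result can take: a probability per posture.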
The implementation forms of the plurality of gestures can be flexibly determined according to the actual requirements of gesture classification. As described in the above disclosed embodiments, in one possible implementation, the plurality of gestures may include: one or more of a standing position, a sitting position, and a lying position, wherein the standing position can include a front standing position and/or a side standing position, the sitting position can include a front sitting position and/or a side sitting position, and the lying position can include one or more of a side lying position, a front lying position, and a back lying position.
The obtained posture classification result may include the probabilities that the target object belongs to each of the plurality of postures and/or the posture with the highest probability among them. The implementation form of step S122 may be flexibly determined according to the actual situation. In some possible implementations, when the posture classification result includes the probabilities of the plurality of postures, the posture or postures with the highest probability may be determined as the current posture of the target object; when the posture classification result includes the posture with the highest probability, that posture may be used directly as the current posture of the target object.
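Selecting the current posture from a probability-form result reduces to an argmax, sketched below. The posture names are illustrative and assume the probabilities are aligned with a fixed posture list.

```python
POSTURES = ["sitting_front", "sitting_side", "lying_side"]

def current_posture(probabilities, postures=POSTURES):
    # The posture with the highest classification probability is taken
    # as the current posture of the target object.
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return postures[best]
```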
Through the above process, end-to-end posture classification can be realized using the second neural network, avoiding the need to introduce keypoints of the target object to realize posture classification, reducing the amount of computation and increasing the speed of posture classification, while still distinguishing multiple postures of the target object to obtain accurate posture information.
In a possible implementation manner, the posture correction method provided in the embodiment of the present disclosure may further include:
performing distance identification according to the image, and determining the current distance between the target object and the target equipment;
and under the condition that the current distance belongs to the preset distance range, adjusting the playing state of the target equipment.
The method of performing distance recognition on the image is not limited in the embodiments of the present disclosure and may be flexibly selected according to the actual situation; any method that can recognize distance from an image may be used and is not limited to the embodiments below. In one possible implementation, the image may be input into a neural network with a distance recognition function, and the current distance between the target object and the target device may be determined from its output; in another possible implementation, the pupils of the target object may be identified from the image, and the current distance between the target object and the target device may be determined based on the interpupillary distance in the image.
After the current distance between the target object and the target device is determined in any of the above manners, the playing state of the target device may be adjusted when the current distance falls within a preset distance range. How the preset distance range is set is not limited in the embodiments of the present disclosure; for example, when the target device is a television, the preset distance range may be larger, and when the target device is a computer or another device with a smaller screen, the preset distance range may be smaller. The handling of the current distance falling within the preset distance range can follow that of the current posture belonging to the target posture: the playing state of the target device may be adjusted from the moment the current distance enters the preset distance range, or the time during which the current distance remains within the preset distance range may be counted and the playing state adjusted once that time exceeds a preset duration.
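The dwell-time variant described above can be sketched as a small monitor: adjustment triggers only after the distance has stayed inside the preset range for longer than a preset duration. The threshold values are illustrative assumptions.

```python
class DistanceMonitor:
    def __init__(self, near_cm=50.0, dwell_s=5.0):
        self.near_cm = near_cm    # preset distance range: closer than this
        self.dwell_s = dwell_s    # preset duration before adjusting
        self.elapsed = 0.0

    def update(self, distance_cm, dt_s=1.0):
        # Accumulate time only while the distance stays in the preset range;
        # reset as soon as it leaves. Returns True when the playing state
        # should be adjusted.
        if distance_cm < self.near_cm:
            self.elapsed += dt_s
        else:
            self.elapsed = 0.0
        return self.elapsed > self.dwell_s
```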
The manner of adjusting the play status of the target device may be according to various disclosed embodiments, and is not described herein again.
By performing distance recognition on the image to determine the current distance between the target object and the target device, and adjusting the playing state of the target device when the current distance falls within the preset distance range, the target object can also be prompted to correct an unsuitable viewing distance.
In one possible implementation, step S13 further includes at least one of:
and controlling the target equipment to play the indication information aiming at the target object.
and adjusting at least one playing parameter of the target device at a preset time interval to reduce the playing effect of the target device, wherein the playing parameter includes one or more of saturation, hue, and lightness.
The indication information for the target object may direct the target object to perform a corresponding operation, such as adjusting its posture and/or viewing distance. The specific implementation and playing form of the indication are not limited in the embodiments of the present disclosure or in those below. In one possible implementation, the indication information may include a voice indication, prompting the target object by voice to watch the target device in the posture of the simulated image, or prompting that the target object is too close to or too far from the target device. In another possible implementation, the indication information may include a text indication, such as text displayed on the target device prompting the target object to adjust its posture according to the simulated image, or indicating in which direction the current viewing distance needs to be adjusted. The indication information may also include both text and voice indications.
As described in the foregoing disclosure, in one possible implementation, adjusting the playing state of the target device may include adjusting at least one playing parameter of the target device, such as saturation, hue, or lightness, at a preset time interval. Which playing parameters are adjusted and how they are adjusted are not limited in the present disclosure and may be flexibly selected according to the actual situation; any manner that reduces the playing effect and degrades the target object's experience of watching the target device may serve as an adjustment. The preset time interval may also be flexibly determined according to the actual situation and is not limited to the embodiments below; in one example, the playing effect of the target device may be further reduced from its current level every minute.
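The periodic degradation can be sketched as follows. The step size and floor value are illustrative assumptions; the floor keeps the screen from going fully dark, which is one reasonable design choice rather than a requirement of this disclosure.

```python
def degrade_playback(params, step=0.1, floor=0.2):
    # Lower each playing parameter (e.g. saturation, lightness) by one step,
    # clamped to a floor so the content remains minimally visible.
    return {name: max(floor, value - step) for name, value in params.items()}

state = {"saturation": 1.0, "lightness": 1.0}
for _ in range(3):                  # e.g. one degradation step per minute
    state = degrade_playback(state)
```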
In some possible implementations, adjusting the playing state of the target device may also take other forms. For example, when the target device plays indication information for the target object, the intensity of the indication may be increased at certain time intervals, such as raising the volume of a voice indication or enlarging the font of a text indication.
By controlling the target device to play indication information for the target object and/or adjusting at least one playing parameter of the target device at a preset time interval, the current viewing state of the target object can be interfered with while the target object has not yet adjusted its viewing posture, providing a continuous, strong reminder to correct its posture, distance, and so on, so that the viewing posture of the target object is corrected as soon as possible and its physiological health is protected.
In one possible implementation, step S13 may further include:
and under the condition that the time that the current posture belongs to the target posture exceeds a first preset time range, closing the target equipment.
The specific time length of the first preset time range may be set according to an actual situation, and is not limited in the embodiment of the present disclosure.
In a possible implementation manner, in the case that the time when the current posture belongs to the target posture exceeds the first preset time range, the target object can be considered to be in an improper viewing posture for a long time, so that the posture of the target object can be corrected by a forced means of turning off the target device, and the physiological health of the target object can be protected.
Through the embodiment of the disclosure, the posture of the target object for watching the target device can be forcibly corrected by closing the target device, so that the body health of the target object is well protected.
In one possible implementation, step S13 may further include: and under the condition that the time that the current posture belongs to other postures except the target posture exceeds a second preset time range, restoring the playing state of the target equipment to the state before adjustment.
The specific time length of the second preset time range may also be set according to an actual situation, which is not limited in the embodiment of the present disclosure; the numbers such as "first" and "second" in the preset time range are only used for distinguishing time judgments under different conditions, and do not limit the specific duration of the time ranges, and the time lengths of the preset time ranges with different numbers may be the same or different.
In one possible implementation, in the case that the time when the current posture belongs to a posture other than the target posture exceeds the second preset time range, the target object may be considered to have completed posture correction, in which case the play state of the target device may be restored to the state before adjustment. The recovery mode may be a reverse operation of adjusting the playing state of the target device in the above disclosed embodiments, such as stopping displaying the simulation image, stopping playing the indication information of the target object, or adjusting the playing parameter according to a preset time interval to improve the playing effect of the target device.
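The shutdown and restore rules above can be combined into one small state machine. The threshold values, return labels, and class name below are illustrative assumptions, not terminology from this disclosure.

```python
class PostureTimer:
    def __init__(self, shutdown_s=60.0, restore_s=10.0):
        self.shutdown_s = shutdown_s   # first preset time range
        self.restore_s = restore_s     # second preset time range
        self.bad_s = 0.0               # time spent in a target (improper) posture
        self.good_s = 0.0              # time spent in any other posture

    def update(self, in_target_posture, dt_s=1.0):
        # Each timer accumulates only while its condition holds continuously.
        if in_target_posture:
            self.bad_s += dt_s
            self.good_s = 0.0
        else:
            self.good_s += dt_s
            self.bad_s = 0.0
        if self.bad_s > self.shutdown_s:
            return "shutdown"        # turn off the target device
        if self.good_s > self.restore_s:
            return "restore"         # restore the pre-adjustment playing state
        return "keep_adjusted"
```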
Through the embodiment of the disclosure, the state of the target device can be flexibly and timely adjusted along with the change of the current posture of the target object, so that the posture of the target object is corrected, and the viewing experience of the target object is improved.
FIG. 3 shows a block diagram of a gesture correction apparatus according to an embodiment of the present disclosure. As shown, the posture-correcting device 20 may include:
an image acquisition module 21, configured to acquire an image of an object of the viewing target device.
A posture obtaining module 22, configured to obtain the current posture of a target object among the objects according to the image.
And the posture correction module 23 is configured to adjust a playing state of the target device to prompt the target object to correct the posture when the current posture belongs to the target posture.
In one possible implementation, the apparatus further includes a simulated image generation module configured to: generate a simulated image based on the image, wherein the posture of the object in the simulated image matches the standard posture. The posture correction module is configured to: control the target device to display the simulated image, so as to prompt the target object to correct its posture according to the simulated image.
In one possible implementation, the shape of the object in the simulated image matches the target object; the simulated image generation module is further to: extracting the body characteristics of the target object in the image through a first neural network to obtain the body characteristics of the target object; generating an initial simulation image of the target object according to the body characteristics; and adjusting the posture of the object in the initial simulation image to a standard posture to generate a simulation image.
In one possible implementation, before the posture obtaining module is invoked, the apparatus is further configured to: perform attribute identification on the objects in the image to acquire attribute information of the objects; and determine, based on the attribute information, a target object that meets a preset attribute requirement from among the objects.
In one possible implementation, the attribute information includes an age, and the target object includes an object whose age belongs to a preset age range; the apparatus is further configured to: determine the age of the object by performing face recognition and/or human keypoint recognition on the object.
In one possible implementation, the posture obtaining module is configured to: classify the posture of the target object in the image through a second neural network to obtain posture classification results over a plurality of postures, the plurality of postures including standing, sitting, and lying postures; and determine the current posture of the target object according to the posture classification results.
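Treating the second neural network as a black box that emits one score per posture class, the step from classification scores to a current posture can be sketched in Python as follows. The class list, the softmax readout, and the confidence threshold are illustrative assumptions, not details from the disclosure:

```python
import math

# Hypothetical posture labels; the disclosure names standing, sitting, and
# lying postures (lying subdivided elsewhere into leaning back, leaning
# forward, and lying on one side).
POSTURES = ["standing", "sitting", "lying_back", "lying_forward", "lying_side"]

def softmax(logits):
    """Convert raw network scores into one probability per posture."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def current_posture(logits, threshold=0.5):
    """Pick the most likely posture; return None if the network is unsure."""
    probs = softmax(list(logits))
    best = max(range(len(probs)), key=lambda i: probs[i])
    return POSTURES[best] if probs[best] >= threshold else None
```

A downstream module would then compare the returned label against the target posture before deciding whether to adjust the playing state.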
In one possible implementation, the apparatus is further configured to: perform distance identification according to the image to determine the current distance between the target object and the target device; and adjust the playing state of the target device when the current distance belongs to a preset distance range.
In one possible implementation, the posture correction module is configured to perform at least one of the following: controlling the target device to play indication information for the target object; and adjusting at least one playing parameter of the target device at a preset time interval to reduce the playing effect of the target device, the playing parameter including one or more of saturation, hue, and lightness.
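The saturation/lightness adjustment can be illustrated with Python's standard colorsys module. Treating each parameter as an HLS component in the range 0..1 and lowering it by one fixed step per preset interval is an assumption made for illustration; the disclosure does not specify a colour model or a step size:

```python
import colorsys

def reduce_play_effect(rgb, step=0.2):
    """Lower the saturation and lightness of an RGB colour (components in
    0..1) by one step, clamping at zero and leaving the hue untouched.
    Applying this repeatedly drives the picture toward unwatchable grey-black,
    matching the escalating degradation described in the disclosure."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    l = max(0.0, l - step)
    s = max(0.0, s - step)
    return colorsys.hls_to_rgb(h, l, s)
```

In a real player this transform would be applied per frame (or via a display-level colour matrix) rather than per pixel in Python.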
In one possible implementation, the posture correction module is further configured to: turn off the target device when the time for which the current posture belongs to the target posture exceeds a first preset time range.
In one possible implementation, the apparatus is further configured to: restore the playing state of the target device to the state before adjustment when the time for which the current posture belongs to a posture other than the target posture exceeds a second preset time range.
Application scenario example
Fig. 4 is a schematic diagram of an application example of the present disclosure. As shown in the figure, a child may watch home video and audio in an incorrect posture. This application example provides a posture correction method that corrects the child's posture when the viewing posture is incorrect.
The posture correction method provided by the application example of the disclosure may include the following processes:
Firstly, the image of a viewer watching the target device is identified through a face attribute model network to judge the viewer's age interval; if the viewer is judged to be under 14 years old, the viewer is determined to be a target object (namely, a child) whose viewing posture needs to be corrected.
Secondly, the image of the target object is processed through a limb contour model network to identify the child's basic body shape, and a viewer figure whose build matches the child's but whose posture is the standard posture is generated as a simulated image.
Thirdly, the child's current viewing posture is classified through a posture recognition model network, for example as a standing, sitting, or lying posture, where the lying posture may include leaning backward, leaning forward, lying on one side, and the like.
Fourthly, timing starts when the child's viewing posture is a lying posture. If ten minutes elapse and the child's viewing posture has not changed and is still lying, a virtual image of the child is displayed on the target device, and a voice reminder prompts the child to watch in the posture shown in the simulated image.
Fifthly, timing continues while the simulated image is displayed. If the child's posture is not corrected, the reminder to correct the viewing posture is repeated every minute, and the saturation, hue, lightness, and the like of the playing video are reduced, making the video harder for the child to watch.
Sixthly, if the child's posture is still not corrected, the fifth step continues until the video is no longer watchable; once the child adjusts to the posture shown in the simulated image, the normal playing effect is restored.
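The timing logic of steps four to six — trigger after ten minutes of lying, then degrade playback one more step per minute, and restore normal playback once the posture is corrected — can be sketched as a small state machine. The class name and the sampling interface are hypothetical; the 600 s and 60 s thresholds follow the application example:

```python
TRIGGER_S = 600        # lying this long starts the reminder (ten minutes)
DEGRADE_EVERY_S = 60   # each further minute lowers the playing effect

class PostureMonitor:
    """Tracks how long the viewer has been lying and maps that duration to a
    degradation level: 0 = normal playback, 1 = show the simulated image and
    voice reminder, 2+ = progressively reduced saturation/hue/lightness."""

    def __init__(self):
        self.lying_since = None
        self.degrade_level = 0

    def update(self, posture, now):
        """Feed one posture sample taken at time `now` (seconds)."""
        if posture == "lying":
            if self.lying_since is None:
                self.lying_since = now  # start timing (step four)
            elapsed = now - self.lying_since
            if elapsed >= TRIGGER_S:
                # one extra degradation step per elapsed minute (steps five/six)
                self.degrade_level = 1 + int((elapsed - TRIGGER_S) // DEGRADE_EVERY_S)
        else:
            # posture corrected: restore normal playback (end of step six)
            self.lying_since = None
            self.degrade_level = 0
        return self.degrade_level
```

A caller would sample the posture classifier periodically and pass each result with a timestamp to `update`, using the returned level to drive the display adjustments.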
Seventhly, distance recognition may also be added to the above process: for example, the child's pupillary distance in the image may be used to estimate the distance between the child and the target device, so that the child's viewing distance can also be corrected in time.
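The pupillary-distance check in step seven can be approximated with a pinhole camera model: distance ≈ focal length (in pixels) × real interpupillary distance ÷ interpupillary distance measured in the image (in pixels). The 63 mm average interpupillary distance and the 500 mm minimum viewing distance below are assumed values for illustration, not figures from the disclosure:

```python
AVG_IPD_MM = 63.0  # typical adult interpupillary distance; a child's is smaller

def viewing_distance_mm(ipd_pixels, focal_length_px, real_ipd_mm=AVG_IPD_MM):
    """Pinhole-model estimate: distance = f * real_size / size_in_image."""
    return focal_length_px * real_ipd_mm / ipd_pixels

def too_close(ipd_pixels, focal_length_px, min_distance_mm=500.0):
    """True when the estimated viewing distance falls inside the preset
    'too close' range, i.e. the viewer should be prompted to move back."""
    return viewing_distance_mm(ipd_pixels, focal_length_px) < min_distance_mm
```

The focal length in pixels would come from camera calibration; using a per-age interpupillary distance instead of the adult average would tighten the estimate for children.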
The posture correction method provided in this application example is not limited to scenes in which children watch home audio-visual devices; it can also be applied in other fields, for example correcting the posture of teenagers watching multimedia equipment or of workers using computers.
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from principle or logic; for brevity, the details are not repeated in the present disclosure.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written implies neither a strict order of execution nor any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile computer readable storage medium or a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
In practical applications, the memory may be a volatile memory such as a RAM; a non-volatile memory such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of these types of memory that provides instructions and data to the processor.
The processor may be at least one of an ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor. It is understood that other devices may also implement the above processor functions; the embodiments of the present disclosure are not particularly limited in this respect.
The electronic device may be provided as a terminal, server, or other form of device.
Based on the same technical concept of the foregoing embodiments, the embodiments of the present disclosure also provide a computer program, which when executed by a processor implements the above method.
Fig. 5 is a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 is a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute computer-readable program instructions to implement various aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A posture correction method, comprising:
acquiring an image of an object viewing a target device;
acquiring the current posture of a target object in the objects according to the image;
and under the condition that the current posture belongs to the target posture, adjusting the playing state of the target equipment to prompt the target object to correct the posture.
2. The method of claim 1, wherein prior to said adjusting the play state of the target device, the method further comprises:
generating a simulated image based on the image, wherein the posture of the object in the simulated image is matched with the standard posture;
the adjusting the playing state of the target device to prompt the target object to correct the posture comprises:
and controlling the target equipment to display the simulated image so as to prompt the target object to correct the posture according to the simulated image.
3. The method of claim 2, wherein the shape of the object in the simulated image matches the target object;
generating a simulated image based on the image, comprising:
extracting the body characteristics of the target object in the image through a first neural network to obtain the body characteristics of the target object;
generating an initial simulation image of the target object according to the physical characteristics;
and adjusting the posture of the object in the initial simulation image to a standard posture to generate the simulation image.
4. The method according to any one of claims 1 to 3, wherein before the obtaining of the current pose of the target object in the objects according to the image, the method further comprises:
performing attribute identification on an object in the image to acquire attribute information of the object;
and determining a target object meeting the preset attribute requirement from the objects based on the attribute information.
5. The method according to claim 4, wherein the attribute information includes an age, and the target object includes an object whose age belongs to a preset age range;
the acquiring of the attribute information of the object includes:
determining the age of the subject by performing face recognition and/or human key point recognition on the subject.
6. The method according to any one of claims 1 to 5, wherein the obtaining a current posture of a target object in the objects according to the image comprises:
classifying the postures of the target object in the image through a second neural network to obtain posture classification results of the target object in a plurality of postures, wherein the plurality of postures comprise multiple kinds of standing postures, sitting postures and lying postures;
and determining the current posture of the target object according to the posture classification result.
7. The method according to any one of claims 1 to 6, further comprising:
performing distance identification according to the image, and determining the current distance between the target object and the target equipment;
and under the condition that the current distance belongs to a preset distance range, adjusting the playing state of the target equipment.
8. The method according to any one of claims 1 to 7, wherein the adjusting the play state of the target device comprises at least one of:
controlling the target equipment to play indication information aiming at the target object;
and adjusting at least one playing parameter in the target device according to a preset time interval to reduce the playing effect of the target device, wherein the playing parameter comprises one or more of saturation, hue and lightness.
9. The method according to any one of claims 1 to 8, wherein the adjusting the play state of the target device in the case that the current posture belongs to a target posture comprises:
and closing the target equipment under the condition that the time of the current posture belonging to the target posture exceeds a first preset time range.
10. The method according to any one of claims 1 to 9, wherein after said adjusting the play state of the target device, the method further comprises:
and under the condition that the time that the current posture belongs to other postures except the target posture exceeds a second preset time range, restoring the playing state of the target equipment to the state before adjustment.
11. A posture correction apparatus, comprising:
an image acquisition module for acquiring an image of an object viewing a target device;
the posture obtaining module is used for obtaining the current posture of a target object among the objects according to the image;
and the posture correction module is used for adjusting the playing state of the target equipment under the condition that the current posture belongs to the target posture so as to prompt the target object to correct the posture.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202110335331.0A 2021-03-29 2021-03-29 Posture correction method and device, electronic equipment and storage medium Pending CN112990055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110335331.0A CN112990055A (en) 2021-03-29 2021-03-29 Posture correction method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112990055A true CN112990055A (en) 2021-06-18

Family

ID=76337946


Country Status (1)

Country Link
CN (1) CN112990055A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114220123A (en) * 2021-12-10 2022-03-22 江苏泽景汽车电子股份有限公司 Posture correction method and device, projection equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447496A (en) * 2020-03-24 2020-07-24 深圳创维-Rgb电子有限公司 User posture correction method and device, storage medium and smart television
CN112036307A (en) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
WO2021052016A1 (en) * 2019-09-18 2021-03-25 华为技术有限公司 Body posture detection method and electronic device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618