CN110244925B - Image display device, method and system - Google Patents

Image display device, method and system

Info

Publication number
CN110244925B
CN110244925B CN201910358971.6A CN201910358971A
Authority
CN
China
Prior art keywords
frame image
image
template
feature vector
moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910358971.6A
Other languages
Chinese (zh)
Other versions
CN110244925A (en)
Inventor
马啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN201910358971.6A
Publication of CN110244925A
Application granted
Publication of CN110244925B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image display device, method, and system. The device comprises a single-sided mirror and a display assembly, with the single-sided mirror overlaid on the front surface of the display assembly. The display assembly acquires the template video image in a template video file and displays it; the single-sided mirror displays the real-time image of the object to be displayed. The device is convenient for the user to use, improving the coordination and accuracy of the user's actions.

Description

Image display device, method and system
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image display device, method, and system.
Background
With social and economic progress, people's lives are becoming increasingly rich, and personal fitness activities such as dance and yoga are increasingly part of daily life.
People often practice by playing a video and imitating the teaching actions in it. For example, a mobile phone plays a yoga teaching video while the practitioner watches his or her own movements in a mirror to keep the actions standard during training. However, because the human eye can focus on only one target at a time, the practitioner must constantly swing the head and switch gaze back and forth between the phone and the mirror to verify the accuracy of the actions.
Practicing in this traditional way easily disperses the practitioner's attention, and because of the time lost in switching gaze, the action shown in the teaching video and the practitioner's action in the mirror are often not the same action at the same moment, making such a setup inconvenient to use.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image display device, method, and system that are convenient to use.
In a first aspect, an embodiment of the present application provides an image display apparatus, including: the single-sided mirror and the display assembly are overlapped and arranged on the front surface of the display assembly;
the display component is used for acquiring the template video image in the template video file and displaying the template video image;
the single-sided mirror is used for displaying the real-time image of the object to be displayed.
In one embodiment, the display component is further configured to receive size adjustment information input by a user, where the size adjustment information is used to adjust the size of the template video image.
In one embodiment, the apparatus further comprises: the data acquisition unit is connected with the processing unit;
the data acquisition unit is used for acquiring the actual video image of the object to be displayed;
the processing unit is used for extracting key frames of the actual video images to obtain actual frame images, and comparing the actual frame images with template frame images in the template video images to obtain analysis results of the actual frame images; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image.
In one embodiment, the apparatus further comprises a storage unit;
and the storage unit is used for storing the analysis result and/or the actual video image.
In one embodiment, the apparatus further comprises a communication unit;
and the communication unit is used for sending the analysis result and/or the actual video image to a cloud server.
In one embodiment, the communication unit is further configured to download the template video file from a cloud server, or receive the template video file from a user terminal.
In one embodiment, the data acquisition unit and/or the processing unit is a data acquisition unit and/or a processing unit provided on a user terminal.
In one embodiment, if the data acquisition unit is disposed on the user terminal, the communication unit is further configured to receive the actual video image acquired by the data acquisition unit.
In a second aspect, an embodiment of the present application provides an image display system, including an image display device according to any one of the above embodiments.
In a third aspect, an embodiment of the present application provides an image display method, which is applied to the image display device in any one of the foregoing embodiments, and includes:
acquiring a template video image in a template video file;
and displaying the template video image and the real-time image of the object to be displayed in the same area.
In one embodiment, the method further comprises:
receiving size adjustment information input by a user;
and adjusting the size of the template video image according to the size adjustment information.
In one embodiment, the method further comprises:
acquiring an actual video image of the object to be displayed;
extracting key frames from the actual video images to obtain actual frame images;
comparing the actual frame image with a template frame image in the template video image to obtain an analysis result of the actual frame image; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image.
In one embodiment, the analysis result includes a total quantization value of the degree of similarity between the actual frame image and the template frame image; and comparing the actual frame image with the template frame image in the template video image to obtain the analysis result of the actual frame image comprises the following steps:
acquiring moment feature vector quantization values of the actual frame image and the template frame image;
acquiring a cosine similarity quantization value of the actual frame image and the template frame image;
and determining the total quantization value according to a preset quantization value weight coefficient, the moment feature vector quantization value and the cosine similarity quantization value.
In one embodiment, the determining the total quantization value according to a preset quantization value weight coefficient, the moment eigenvector quantization value, and the cosine similarity quantization value includes:
normalizing the moment feature vector quantized value and the cosine similarity quantized value, and multiplying the normalized moment feature vector quantized value and the cosine similarity quantized value by corresponding quantized value weight coefficients respectively to obtain the total quantized value.
In one embodiment, the obtaining the moment feature vector quantization value of the actual frame image and the template frame image includes:
carrying out portrait segmentation on the actual frame image to obtain a target portrait;
extracting the outline of the target portrait to obtain a target shape;
extracting shape characteristics of the target shape to obtain a target geometric invariant moment;
matching the target geometric invariant moment with the geometric invariant moment of the object in the template frame image to obtain a moment feature vector distance between the actual frame image and the template frame image;
and determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and a preset moment feature vector range.
In one embodiment, the determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and the preset moment feature vector range includes:
if the distance between the moment feature vectors is smaller than or equal to the minimum value of the preset moment feature vector range, determining the quantized value of the moment feature vectors as a first value;
if the moment feature vector distance is larger than the minimum value of the moment feature vector range and smaller than the maximum value of the moment feature vector range, calculating the moment feature vector quantized value according to the moment feature vector distance, the minimum value of the moment feature vector range and the maximum value of the moment feature vector range;
and if the moment feature vector distance is greater than or equal to the maximum value of the moment feature vector range, determining the moment feature vector quantized value as a second value.
In one embodiment, the calculating the moment feature vector quantization value according to the moment feature vector distance, the minimum value of the moment feature vector range, and the maximum value of the moment feature vector range includes:
determining the moment feature vector quantization value p according to the formula p = (emax − L) / (emax − emin), wherein L is the moment feature vector distance, emin is the minimum value of the moment feature vector range, and emax is the maximum value of the moment feature vector range.
In one embodiment, the acquiring the cosine similarity quantization value of the actual frame image and the template frame image includes:
performing key point detection on the actual frame image to obtain a plurality of target key points;
grouping the target key points to obtain target key parts;
matching the target key part with a corresponding key part in the template frame image to obtain the cosine similarity;
and determining the cosine similarity quantization value according to the cosine similarity.
In one embodiment, the determining the cosine similarity quantization value according to the cosine similarity includes:
if the cosine similarity is smaller than 0, determining that the cosine similarity quantization value is 0;
and if the cosine similarity is greater than or equal to 0, determining the cosine similarity as the cosine similarity quantization value.
In one embodiment, the method further comprises:
the analysis result is locally stored or uploaded to a cloud server; and/or
And locally storing or uploading the actual video image to a cloud server.
In one embodiment, before the step of obtaining the template video image in the template video file, the method includes:
downloading the template video file from a cloud server;
or,
and receiving the template video file sent by the terminal.
According to the image display device, method, and system described above, the image display device comprises a single-sided mirror and a display assembly: the display assembly can acquire and display the template video image in the template video file, and the single-sided mirror can display the real-time image of the object to be displayed. Because the single-sided mirror is overlaid on the front surface of the display assembly, the real-time image of the object to be displayed and the template video image can be shown in the same area, which avoids the inconvenience of traditional image display devices that show them in separate areas. During use, the user can see the template video image and his or her own real-time image at the same time, without swinging the head back and forth or constantly switching gaze. Convenience is therefore greatly improved, attention can be better concentrated, the coordination and accuracy of the user's actions improve, and the exercise effect is greatly enhanced.
Drawings
FIG. 1 is a schematic diagram of an image display device according to an embodiment;
FIG. 2 is a schematic diagram of an image display device according to another embodiment;
FIG. 2a is a schematic diagram of determining the distance between an image display device and a user according to an embodiment;
FIG. 3 is a schematic diagram of an image display device according to another embodiment;
FIG. 4 is a schematic diagram of an image display device according to another embodiment;
FIG. 5 is a schematic diagram of an image display system according to another embodiment;
FIG. 6 is a flowchart of an image display method according to an embodiment;
FIG. 7 is a flowchart of an image display method according to another embodiment;
FIG. 8 is a flowchart of a method for displaying images according to another embodiment;
FIG. 9 is a flowchart of a method for displaying images according to another embodiment;
FIG. 10 is a flowchart of a method for displaying images according to another embodiment;
FIG. 11 is a flowchart illustrating a method for displaying images according to another embodiment;
FIG. 11a is a schematic diagram of key points of a human body according to one embodiment;
FIG. 12 is a flowchart of a method for displaying images according to another embodiment;
Fig. 13 is an internal structural view of a computer device in one embodiment.
Reference numerals illustrate:
single-sided mirror: 100; display assembly: 200;
data acquisition unit: 300; processing unit: 400;
storage unit: 500; communication unit: 600.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The following describes the technical solution of the present application and how the technical solution of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of an image display device according to an embodiment. As shown in fig. 1, the apparatus includes: the single-sided mirror 100 and the display assembly 200, the single-sided mirror 100 is overlapped on the front surface of the display assembly 200; the display component 200 is used for acquiring the template video image in the template video file and displaying the template video image; the single-sided mirror 100 is used for displaying real-time images of objects to be displayed.
Specifically, the image display apparatus may include a single-sided mirror 100 and a display assembly 200. The single-sided mirror 100 may be a piece of one-way see-through glass, that is, ordinary glass coated by vacuum deposition with a thin film of chromium, aluminum, iridium, or silver. The transmittance of the glass can be controlled through the thickness and density of the film, yielding a half-transmissive, half-reflective effect. The single-sided mirror 100 thus acts as a "mirror," displaying a real-time image of the object to be displayed, such as a person who is training. In addition, the display assembly 200 may obtain a template video image, such as a teaching video image, from the template video file to be referenced, and display it through a display screen in the display assembly 200. Since the single-sided mirror 100 is overlaid on the front surface of the display assembly 200, the single-sided mirror 100 and the display assembly 200 can display the object to be displayed and the template video image, respectively, in the same area.
In this embodiment, the image display device includes the single-sided mirror and the display assembly: the display assembly can acquire and display the template video image in the template video file, while the single-sided mirror displays the real-time image of the object to be displayed. Because the single-sided mirror is overlaid on the front surface of the display assembly, the real-time image of the object to be displayed and the template video image are shown in the same area, avoiding the inconvenience of traditional image display devices that show them in separate areas. During use, the user can see the template video image and his or her own real-time image at the same time, without swinging the head back and forth or constantly switching gaze. Convenience is therefore greatly improved, attention can be better concentrated, the coordination and accuracy of the user's actions improve, and the exercise effect is greatly enhanced.
Optionally, on the basis of the above embodiment, the display assembly 200 may also be a touch screen or a sensing screen. The display assembly 200 may receive size adjustment information input by the user, so that the size of the template video image can be adjusted accordingly. For example, the display assembly 200 may detect the sliding of the user's thumb and index finger through a touch pad disposed over its screen, thereby scaling the template video image so that the size of the reference figure in it meets the user's needs. In this embodiment, the device receives size adjustment information through the display assembly and adjusts the template video image based on it, so that the reference figure in the template video image is comparable in size to the real-time image of the object to be displayed. This makes it easier for the user to compare his or her movements with the instructor's directly and intuitively, further improving the coordination and accuracy of the movements and thus the training effect.
Fig. 2 is a schematic structural diagram of an image display device according to another embodiment. On the basis of the above embodiments, as shown in fig. 2, the apparatus may further include a data acquisition unit 300 and a processing unit 400, where the data acquisition unit 300 is connected to the processing unit 400; the processing unit 400 in fig. 2 is generally disposed inside the display assembly 200, and the location in fig. 2 is shown by way of example only and not as a limitation of the present embodiment. The data acquisition unit 300 is configured to acquire an actual video image of the object to be displayed; the processing unit 400 is configured to perform key frame extraction on the actual video image to obtain an actual frame image, and compare the actual frame image with a template frame image in the template video image to obtain an analysis result of the actual frame image; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image.
Specifically, the apparatus may further include a data acquisition unit 300 and a processing unit 400, which are connected to each other; this embodiment does not limit the manner of connection, which may be, for example, an electrical connection or a communication connection. When electrically connected, the two may be joined by a data cable, arranged on the same printed board and connected through its wiring, or even packaged into the same chip by an integration process; when connected by communication, the communication mode may be 3G, 4G, 5G, or a short-range wireless mode, which this embodiment does not limit. Optionally, the data acquisition unit 300 may be an image capturing device that acquires the actual video image by photographing the object to be displayed. Optionally, the data acquisition unit 300 may also be a three-dimensional human body sensing module that acquires the actual video image by collecting three-dimensional point cloud data of the object to be displayed; such modules include, but are not limited to, three-dimensional human body sensors based on the Time of Flight (TOF) and structured-light principles together with the associated sensing signal processing modules. The processing unit 400 is configured to perform key frame extraction on the actual video image to obtain an actual frame image, and to compare the actual frame image with a template frame image in the template video image to obtain an analysis result for the actual frame image. It should be noted that the template frame image may likewise be obtained from the template video image by key frame extraction; for example, the actual frame image and the template frame image may be extracted at the same moment in the actual video image and the template video image, respectively (a sketch of such timestamp-matched extraction follows this paragraph). The analysis result can represent the degree of similarity between the object gestures in the actual frame image and the template frame image, for example, how closely the exerciser's action in the actual frame image matches the teaching action in the template frame image. The specific process of key frame extraction and comparison is described with reference to the embodiments of figs. 9 to 12 below and is not repeated here.
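As an illustration of the timestamp-matched key frame extraction just described, here is a minimal Python sketch assuming OpenCV (cv2) is available; the function name frame_at, the file names, and the fixed-timestamp sampling strategy are illustrative assumptions, not part of this application.

```python
import cv2

def frame_at(video_path: str, t_ms: float):
    """Grab the frame nearest to timestamp t_ms (milliseconds) from a video file."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, t_ms)  # seek to the requested timestamp
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

# Pair an actual frame image with the template frame image at the same moment.
actual_frame = frame_at("actual_video.mp4", 5000.0)
template_frame = frame_at("template_video.mp4", 5000.0)
```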
Optionally, either or both of the data acquisition unit and the processing unit described above may be the data acquisition unit and/or processing unit of a user terminal. For example, the user terminal shoots the actual video image of the object to be displayed through its own data acquisition unit, such as a camera, and then sends the actual video image to the processing unit for processing, yielding the analysis result. Alternatively, the processing unit may be a processing unit on the user terminal. Optionally, when the data acquisition unit is that of the user terminal, the user terminal may during use be placed above the display assembly or on a support at its bottom, so that it can conveniently photograph the user. In this embodiment, the analysis result is obtained using the data acquisition unit and/or processing unit of the user terminal, so that a separate data acquisition unit and/or processing unit need not be provided, greatly reducing the cost of the image display device.
Optionally, the user terminal may further receive size adjustment information input by a user through a display screen of the user terminal, so as to adjust a size of the template video image, so that the display component displays the adjusted size of the template video image.
In this embodiment, the image display device may further include a data acquisition unit and a processing unit. The processing unit is connected to the data acquisition unit and performs key frame extraction on the actual video image acquired by the data acquisition unit to obtain an actual frame image; it then compares the actual frame image with a template frame image in the template video image to obtain an analysis result for the actual frame image. The template frame image is an image obtained by key frame extraction from the template video image, and the analysis result can represent the degree of similarity between the object gestures in the actual frame image and the template frame image. The user can therefore learn the accuracy of his or her motions from the analysis result and understand the exercise effect conveniently and intuitively, which enriches the functions of the device and makes it more convenient to use.
Optionally, the field of view of the camera device should cover the user's whole body; whether it does depends on the user's height and the distance between the camera device and the user. For example, see fig. 2a, where H is the user's height, L is the distance between the person and the camera, θ is the limiting view angle of the camera, and C represents the camera position. The relationship θ = 2·atan(0.5H/L) then holds, and the correspondence between different H, L combinations and the required camera view angle can be calculated, as shown in Table 1.
TABLE 1
H (meters)   1.5  1.7  1.9  1.5  1.7  1.9  1.5  1.7  1.9
L (meters)   1    1    1    1.5  1.5  1.5  2    2    2
θ (degrees)  74   81   87   53   59   65   41   46   51
Typically, the actual field of view of the camera is greater than 65 degrees, so a distance of 1.5 meters to 2 meters from the camera may be recommended for the user.
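For reference, the relation θ = 2·atan(0.5H/L) behind Table 1 can be transcribed directly; a minimal Python sketch follows (the function name is illustrative):

```python
import math

def required_view_angle(height_m: float, distance_m: float) -> float:
    """Limiting camera view angle (degrees) needed to cover a user's full body."""
    return math.degrees(2 * math.atan(0.5 * height_m / distance_m))

# Reproduces Table 1, e.g. H = 1.5 m at L = 1 m needs about 74 degrees.
for h in (1.5, 1.7, 1.9):
    for l in (1.0, 1.5, 2.0):
        print(f"H={h} m, L={l} m -> theta = {required_view_angle(h, l):.0f} deg")
```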
Fig. 3 is a schematic structural diagram of an image display device according to another embodiment. Optionally, on the basis of the foregoing embodiments, the apparatus may further include a storage unit 500 configured to store the analysis result and/or the actual video image. The storage unit 500 is generally disposed inside the display assembly 200 and is shown in fig. 3 by way of example only, not as a limitation of this embodiment. Specifically, the storage unit 500 may be connected to the data acquisition unit 300 and/or the processing unit 400, and may store the actual video image acquired by the data acquisition unit 300 as well as the analysis result determined by the processing unit 400. In this embodiment, the storage unit stores the analysis result and/or the actual video image, making it convenient for the user to review and tally his or her data; the device's functions are thus richer and meet a variety of usage requirements.
Alternatively, on the basis of the embodiment shown in fig. 3, the apparatus may further include a communication unit 600, which is generally disposed inside the display assembly 200 and is shown in fig. 4 by way of example only, not limitation. The communication unit 600 may be configured to send the analysis result and/or the actual video image to a cloud server, which stores them so that the user can view them anytime and anywhere by accessing the cloud server. This gives the device richer functions and meets a variety of usage requirements; in addition, the cloud server can compile statistics on the analysis results and/or actual video images and thereby provide more customized services to different users. Optionally, the communication unit 600 may further be configured to download the template video file from the cloud server, or to receive the template video file from a user terminal. That is, the template video file can be downloaded from the cloud server over the network or received from the user terminal, so the sources of the template video file are diversified and not limited to one fixed acquisition mode, which greatly improves convenience for the user.
In one embodiment, an image display system is further provided. Referring specifically to fig. 5, the image display system includes a cloud server, a user terminal, and the image display device according to any one of the above embodiments. The image display device can be used alone, used together with a user terminal, or connected to a cloud server.
The image display device and the system to which the image display device is applied are described above, and the image display method according to the present application will be described in detail below.
The image display method provided by the application can be applied to the device shown in fig. 1. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of a portion of the structure associated with the present application and is not intended to limit the image display apparatus to which the present application is applied, and that a particular image display apparatus may include more or less components than those shown, or may combine some of the components, or have a different arrangement of components.
The execution subject of the method embodiment described below may be an image display device, which may be implemented as part or all of the image display system by software, hardware, or a combination of software and hardware. In the following method embodiments, the execution subject is an image display device or a component in the image display device will be described as an example.
Fig. 6 is a flowchart of an image display method according to an embodiment. The embodiment relates to a specific process of displaying a template video image and a real-time image in the same area by an image display device. As shown in fig. 6, the method includes:
s101, obtaining a template video image in a template video file.
Specifically, the image display device may acquire the template video image in the template video file. Optionally, it may read a template video file stored in its storage unit and decode it to obtain the template video image, or it may receive a template video file sent by another device and decode that file; this embodiment does not limit the specific manner in which the image display device acquires the template video image in the template video file.
And S102, displaying the template video image and the real-time image of the object to be displayed in the same area.
Specifically, the image display device displays the real-time image of the object to be displayed through the single-sided mirror, and displays the template video image in the template video file in the same area through the display component. The detailed description of the single-sided mirror and the display assembly may be referred to the foregoing, and will not be repeated herein.
In this embodiment, the image display device can obtain the template video image in the template video file and display it in the same area as the real-time image of the object to be displayed, avoiding the inconvenience of traditional image display devices that show the two in separate areas. During use, the user can see the template video image and his or her own real-time image at the same time, without swinging the head back and forth or constantly switching gaze; convenience is therefore greatly improved, attention can be better concentrated, and the coordination and accuracy of the user's actions improve, greatly enhancing the exercise effect.
Fig. 7 is a flowchart of an image display method according to another embodiment. Optionally, on the basis of the embodiment shown in fig. 6, the method may further include:
s201, receiving size adjustment information input by a user.
S202, adjusting the size of the template video image according to the size adjustment information.
Fig. 8 is a flowchart of an image display method according to another embodiment. Optionally, on the basis of the embodiment shown in fig. 6 or fig. 7, the method may further include:
S301, acquiring an actual video image of the object to be displayed.
S302, extracting key frames of the actual video images to obtain actual frame images.
S303, comparing the actual frame image with a template frame image in the template video image to obtain an analysis result of the actual frame image; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image.
Fig. 9 is a flowchart of an image display method according to another embodiment. Alternatively, on the basis of the embodiment shown in fig. 8, the analysis result includes a total quantization value of the degree of similarity of the actual frame image and the template frame image; one possible form of step S303 described above may include:
s401, obtaining moment feature vector quantized values of the actual frame image and the template frame image.
Specifically, the image display device can analyze the objects in the actual frame image and the template frame image, so as to calculate and obtain moment feature vector quantized values of the actual frame image and the template frame image, and the moment feature vector quantized values can represent the similarity degree of actions of the objects in the actual frame image and the template frame image.
Alternatively, one possible implementation of this step S401 may be shown in fig. 10 described below, and will not be described here.
S402, obtaining a cosine similarity quantization value of the actual frame image and the template frame image.
Specifically, the image display device can analyze the objects in the actual frame image and the template frame image, so as to calculate and obtain cosine similarity quantized values of the actual frame image and the template frame image, and the cosine similarity quantized values can represent the similarity degree of the actions of the objects in the actual frame image and the template frame image from the dimension of the vector angle.
Alternatively, one possible implementation of this step S402 may be shown in fig. 11 described below, and will not be described here.
S403, determining the total quantization value according to a preset quantization value weight coefficient, the moment feature vector quantization value and the cosine similarity quantization value.
Specifically, the image display device can determine a total quantization value capable of representing the total similarity degree of the object in the actual frame image and the template frame image according to a preset quantization value weight coefficient, a moment feature vector quantization value and a cosine similarity quantization value. The quantized value weight coefficient is a weight coefficient of each of the moment feature vector quantized value and the cosine similarity quantized value.
Alternatively, one possible implementation of step S403 may include: normalizing the moment feature vector quantized value and the cosine similarity quantized value, and multiplying each normalized value by its corresponding quantized value weight coefficient to obtain the total quantized value. Specifically, the image display device normalizes the moment feature vector quantized value and the cosine similarity quantized value, multiplies each by its weight coefficient, and sums the products, thereby obtaining a total quantized value representing the degree of similarity between the object in the actual frame image and the object in the template frame image. For example, the total quantization value S in this step can be obtained from the formula S = [b1, b2] · [p, k]^T = b1·p + b2·k, or a variant of it, where b1 and b2 are the quantized value weight coefficients corresponding to the moment feature vector quantized value and the cosine similarity quantized value respectively, p is the moment feature vector quantized value of the actual frame image, k is the cosine similarity quantized value of the actual frame image, and b1 + b2 = 1. Alternatively, the quantized value weight coefficients may be adjusted according to the desired emphasis of the matching; for example, if the influence of the angles of the action on the score is to be ignored, the weight coefficient corresponding to the cosine similarity quantized value may be set to 0. In this implementation, the image display device normalizes the moment feature vector quantized value and the cosine similarity quantized value and multiplies each by its corresponding weight coefficient to obtain the total quantized value, so that the proportion each of the two dimensions occupies in the scoring is controlled by the weight coefficients, making the resulting total quantized value more reasonable, accurate, and suited to the user's needs.
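A minimal Python sketch of this weighted combination, assuming p and k are already normalized to [0, 1]; the default weights of 0.5 are placeholders, not values prescribed by this application:

```python
def total_quantization(p: float, k: float, b1: float = 0.5, b2: float = 0.5) -> float:
    """Total quantization value S = b1*p + b2*k, with b1 + b2 = 1."""
    assert abs(b1 + b2 - 1.0) < 1e-9, "quantized value weight coefficients must sum to 1"
    return b1 * p + b2 * k

# Ignoring the angle dimension, per the example above: set the cosine weight to 0.
score = total_quantization(p=0.8, k=0.6, b1=1.0, b2=0.0)
```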
In this embodiment, the image display device may obtain the moment feature vector quantization values of the actual frame image and the template frame image, and obtain the cosine similarity quantization values of the actual frame image and the template frame image, and then determine the total quantization value according to the preset quantization value weight coefficient, the moment feature vector quantization value and the cosine similarity quantization value, so as to comprehensively and objectively perform quantization scoring on the postures of the objects in the actual frame image and the template frame image from two aspects of moment feature vector and cosine similarity, and further make the obtained total quantization value more reasonable, accurate and meet the user requirements.
Fig. 10 is a flowchart of an image display method according to another embodiment. The embodiment relates to a specific process of obtaining moment feature vector quantization values of an actual frame image and a template frame image by an image display device. Alternatively, as shown in fig. 10, the step S401 may specifically include:
s501, dividing the human images of the actual frame images to obtain target human images.
Specifically, the image display device may separate the person in the actual frame image from the background. For a simpler background, segmentation can be performed with an inter-frame difference algorithm (see the sketch after this paragraph); for a more complex background, a trained deep neural network model with semantic segmentation capability can be used. Image-based deep neural network models with semantic segmentation capability include, but are not limited to, the Fully Convolutional Network (FCN), the mask region-based convolutional neural network (Mask R-CNN), and the like.
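A minimal sketch of the simpler-background case using inter-frame differencing with OpenCV; the threshold and the morphological cleanup step are illustrative choices, and a trained FCN or Mask R-CNN model would take this function's place for complex backgrounds:

```python
import cv2
import numpy as np

def segment_by_frame_difference(prev_frame, curr_frame, thresh: int = 25):
    """Rough foreground (portrait) mask from the difference of consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Close small holes so the portrait forms a single connected region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    return mask
```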
S502, extracting the outline of the target portrait to obtain a target shape.
Specifically, to prevent colors and textures from interfering with the processing result, the image display device performs contour detection on the segmented target portrait and fills the closed region formed by the contour, thereby obtaining the target shape.
S503, extracting shape characteristics of the target shape to obtain the target geometric invariant moment.
Specifically, the image display device performs shape feature extraction on the target shape, thereby obtaining the target geometric invariant moment of the actual frame image. A geometric invariant moment (Hu moments, Zernike moments, and the like are commonly used) can serve as a feature vector because it is invariant to rotation, translation, scale, and similar transformations.
S504, matching the target geometric invariant moment with the geometric invariant moment of the object in the template frame image to obtain the moment feature vector distance between the actual frame image and the template frame image.
In general, the template frame image may be processed by the method of steps S501 to S503 above, so that the geometric invariant moment of the object in the template frame image is obtained. The image display device then matches the geometric invariant moment of the object in the actual frame image with that of the object in the template frame image to obtain the moment feature vector distance between the actual frame image and the template frame image. It should be noted that the smaller the distance between the vectors, i.e., the smaller the differences between corresponding moment eigenvalues of the same order, the closer the shapes of the objects depicted in the two images. (A sketch of steps S502 to S504 follows this paragraph.)
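Steps S502 to S504 might be sketched as follows, assuming OpenCV's Hu moments as the geometric invariant moment and the Euclidean distance between log-scaled moment vectors as the moment feature vector distance; the log scaling is a common practical choice rather than something mandated by this application:

```python
import cv2
import numpy as np

def hu_vector(shape_mask: np.ndarray) -> np.ndarray:
    """Log-scaled Hu moment vector of a filled binary shape."""
    m = cv2.moments(shape_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log scaling tames the large dynamic range of the raw moments.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def moment_feature_distance(actual_mask: np.ndarray, template_mask: np.ndarray) -> float:
    """Moment feature vector distance between the actual and template shapes."""
    return float(np.linalg.norm(hu_vector(actual_mask) - hu_vector(template_mask)))
```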
S505, determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and a preset moment feature vector range.
Specifically, the image display device may define a predetermined moment feature vector range, and determine the moment feature vector quantization value of the actual frame image according to whether the moment feature vector distance is within the moment feature vector range or not and greater than or less than the moment feature vector range.
Alternatively, one possible implementation of this step S505 may be as shown in fig. 12, including:
S601A, if the distance between the moment feature vectors is smaller than or equal to the minimum value of the preset moment feature vector range, determining the quantized value of the moment feature vector as a first value, optionally, the first value may be 1, which may represent that the similarity is very high.
And S601B, if the moment feature vector distance is larger than the minimum value of the moment feature vector range and smaller than the maximum value of the moment feature vector range, calculating the moment feature vector quantized value according to the moment feature vector distance, the minimum value of the moment feature vector range and the maximum value of the moment feature vector range.
Optionally, this step can be performed according to the formula p = (emax − L) / (emax − emin), or a variant of it, to determine the moment feature vector quantization value p, where L is the moment feature vector distance, emin is the minimum value of the moment feature vector range, and emax is the maximum value of the moment feature vector range. Characterizing the moment feature vector quantization value with this formula makes the calculation convenient and accurate.
S601C, if the moment feature vector distance is greater than or equal to the maximum value of the moment feature vector range, determining the moment feature vector quantization value as a second value, optionally, the second value may be 0, which may represent that the similarity is very low.
Alternatively, steps S601A to S601C may be written together as the piecewise formula

p = 1, if L ≤ emin;
p = (emax − L) / (emax − emin), if emin < L < emax, wherein 0 < p < 1;
p = 0, if L ≥ emax;

or a variant of it.
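A direct Python transcription of this piecewise rule (emin and emax are preset, tunable bounds):

```python
def moment_quantization(L: float, e_min: float, e_max: float) -> float:
    """Map moment feature vector distance L to a quantization value p in [0, 1]."""
    if L <= e_min:
        return 1.0  # first value: very high similarity
    if L >= e_max:
        return 0.0  # second value: very low similarity
    return (e_max - L) / (e_max - e_min)  # 0 < p < 1 in between
```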
In this embodiment, the image display device segments the actual frame image to obtain the target portrait, performs contour extraction on the target portrait to obtain the target shape, and then performs shape feature extraction on the target shape to obtain the target geometric invariant moment. It then matches the target geometric invariant moment with the geometric invariant moment of the object in the template frame image to obtain the moment feature vector distance between the actual frame image and the template frame image, and finally determines the moment feature vector quantization value of the actual frame image from that distance and the preset moment feature vector range. The moment feature vector is thereby quantized, so the user can understand it more intuitively and use it more conveniently.
Fig. 11 is a flowchart of an image display method according to another embodiment. The embodiment relates to a specific process of acquiring cosine similarity quantization values of an actual frame image and a template frame image by an image display device. Alternatively, as shown in fig. 11, S402 may specifically include:
and S701, performing key point detection on the actual frame image to obtain a plurality of target key points.
Specifically, the image display device may perform key point detection on the actual frame image to obtain a plurality of target key points. The detected key points usually form a predefined sequence based on the image positions of points corresponding to the skeletal joints of the human body, so the plurality of target key points can represent a particular action of a particular part of the object.
S702, grouping the target key points to obtain target key parts.
For many movements, only the accuracy of the movements of part of the body needs attention. Therefore, the image display device can group the key points and extract those of the parts to be focused on for analysis. For example, as shown in fig. 11a, if a certain action only concerns the right hand, only key points 12, 13, 14, 2, and 11 are used for analysis, yielding the target key part, namely the right hand.
S703, matching the target key part with the corresponding key part in the template frame image to obtain the cosine similarity.
First, the obtained target key points of the target key parts are combined, in a fixed order, into an aN × 1 vector, where N is the number of key points participating in the analysis and a is the coordinate dimension (2 for two-dimensional, 3 for three-dimensional). The target key part is then matched with the corresponding key part in the template frame image by computing the cosine similarity (i.e., the cosine distance) between the vector formed from the key points of the template frame image and the vector formed from the key points of the user image. It should be noted that the cosine similarity between two vectors reflects whether their directions are consistent and is independent of their magnitudes. During exercise, different people have different distances between key points because of different body sizes, but when evaluating the accuracy of the movements, what mainly matters is whether the angles between the relevant body parts are consistent with the standard action.
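A minimal sketch of steps S701 to S703, assuming key points have already been detected (for example by a pose estimator) and are given as an (N, a) coordinate array; the array shapes and the right-hand index group are illustrative:

```python
import numpy as np

# Placeholder detections: (N_total, 2) arrays of two-dimensional key point coordinates.
actual_kps = np.random.rand(18, 2)
template_kps = np.random.rand(18, 2)

def part_vector(keypoints: np.ndarray, indices: list) -> np.ndarray:
    """Flatten the selected key points, in a fixed order, into an aN x 1 vector."""
    return keypoints[indices].reshape(-1)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two key-part vectors (magnitude-independent)."""
    denom = float(np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.dot(u, v) / denom) if denom else 0.0

# Right-hand group per fig. 11a (indices illustrative).
RIGHT_HAND = [12, 13, 14, 2, 11]
d = cosine_similarity(part_vector(actual_kps, RIGHT_HAND),
                      part_vector(template_kps, RIGHT_HAND))
```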
S704, determining the cosine similarity quantization value according to the cosine similarity.
Specifically, the image display device may determine the cosine similarity quantization value according to the cosine similarity.
Because the value range of the cosine similarity is [−1, 1], when the cosine similarity is smaller than 0, the cosine similarity quantization value is determined to be 0; when the cosine similarity is greater than or equal to 0, the cosine similarity itself is directly taken as the cosine similarity quantization value. For example, the cosine similarity quantization value k can be calculated by the formula

k = 0, if d < 0; k = d, if d ≥ 0,

or a variant of it, where d is the cosine similarity.

For example, let the cosine similarities of one action be D = {d_i, i = 1, 2, ..., n}, where d_i is the cosine similarity of one item. For a given action, the accuracy of several parts may need to be evaluated at the same time; the vector formed from the coordinates of one part's set of key points is one item. Since each cosine similarity lies in [−1, 1], the closer it is to 1, the closer the included angle is to 0, i.e., the more similar the two vectors; the closer it is to −1, the closer the angle between the two vectors is to 180 degrees, with −1 representing exactly opposite directions. D can therefore be processed as follows to obtain the per-item cosine similarity quantization values K = {k_i}:

k_i = 0, if d_i < 0; k_i = d_i, if d_i ≥ 0,

wherein 0 ≤ k_i ≤ 1. Finally, the quantization values of all items are combined into the cosine similarity quantization value of the key part matching degree, k = A × K, where A = [a_1, a_2, ..., a_n] and Σ a_i = 1. The coefficients a_i are the weights of each part's cosine similarity quantization value and can be adjusted as needed.
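A minimal sketch of this per-item clamping and weighted combination, assuming NumPy; the example weights are placeholders:

```python
import numpy as np

def key_part_quantization(d: np.ndarray, a: np.ndarray) -> float:
    """Clamp per-item cosine similarities to [0, 1] and combine with weights a."""
    k = np.clip(d, 0.0, 1.0)  # k_i = 0 when d_i < 0, else k_i = d_i
    assert abs(a.sum() - 1.0) < 1e-9, "part weights must sum to 1"
    return float(a @ k)

# Two body parts, weighted equally (placeholder weights).
k_total = key_part_quantization(np.array([0.9, -0.2]), np.array([0.5, 0.5]))
```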
In this embodiment, key point detection is performed on the actual frame image to obtain a plurality of target key points; the target key points are grouped to obtain target key parts; the target key parts are matched against the corresponding key parts in the template frame image to obtain the cosine similarity; and the cosine similarity quantization value is then determined from the cosine similarity. The user can thus intuitively judge the accuracy of the motion from the quantized value, which makes the device more convenient to use.
Optionally, on the basis of the foregoing embodiment, the method further includes: the analysis result is locally stored or uploaded to a cloud server; and/or, the actual video image is locally stored or uploaded to a cloud server.
Optionally, before the step of obtaining the template video image in the template video file, the method includes: downloading the template video file from a cloud server; or receiving the template video file sent by the terminal.
In the above image display method, reference may be made to the specific description of the above image display device for its principle and technical effects, which are not repeated herein.
It should be understood that, although the steps in the flowcharts of fig. 6-12 are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 6-12 may include multiple sub-steps or phases that are not necessarily performed at the same time, but may be performed at different times, nor does the order in which the sub-steps or phases are performed necessarily occur sequentially, but may be performed alternately or alternately with at least a portion of the sub-steps or phases of other steps or other steps.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 13. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image display method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of any of the embodiments described above when the computer program is executed by the processor. Specifically, the processor, when executing the above computer program, implements the following steps:
acquiring a template video image in a template video file;
and displaying the template video image and the real-time image of the object to be displayed in the same area.
It should be clear that the process of executing the computer program by the processor in the embodiment of the present application is consistent with the execution of each step in the above method, and specific reference may be made to the foregoing description.
In one embodiment, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the embodiments described above. In particular, the computer program when executed by the processor performs the steps of:
acquiring a template video image in a template video file;
and displaying the template video image and the real-time image of the object to be displayed in the same area.
It should be clear that the process of executing the computer program by the processor in the embodiments of the present application corresponds to the execution of each step in the above method, and specific reference may be made to the above description.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by instructing relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above examples merely represent several embodiments of the present application, which are described specifically and in detail, but are not therefore to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image display device, comprising: a single-sided mirror, a display assembly, a data acquisition unit, and a processing unit, wherein the single-sided mirror is overlapped on the front surface of the display assembly, and the data acquisition unit is connected with the processing unit;
the display component is used for acquiring the template video image in the template video file and displaying the template video image;
the single-sided mirror is used for displaying the real-time image of the object to be displayed;
the data acquisition unit is used for acquiring the actual video image of the object to be displayed;
the processing unit is used for extracting a key frame from the actual video image to obtain an actual frame image, and comparing the actual frame image with a template frame image in the template video image to obtain an analysis result of the actual frame image; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image;
the analysis result comprises a total quantization value of the similarity degree of the actual frame image and the template frame image; the processing unit is specifically configured to obtain a moment feature vector quantization value of the actual frame image and the template frame image, obtain a cosine similarity quantization value of the actual frame image and the template frame image, and determine the total quantization value according to a preset quantization value weight coefficient, the moment feature vector quantization value, and the cosine similarity quantization value;
wherein obtaining the moment feature vector quantization value of the actual frame image and the template frame image comprises:
performing portrait segmentation on the actual frame image by using an inter-frame difference algorithm or a deep neural network model with a semantic segmentation function to obtain a target portrait;
extracting the outline of the target portrait to obtain a target shape;
extracting shape characteristics of the target shape to obtain a target geometric invariant moment;
matching the target geometric invariant moment with the geometric invariant moment of the object in the template frame image to obtain a moment feature vector distance between the actual frame image and the template frame image;
determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and a preset moment feature vector range;
wherein obtaining the cosine similarity quantization value of the actual frame image and the template frame image comprises:
performing key point detection on the actual frame image to obtain a plurality of target key points;
grouping the target key points to obtain target key positions;
matching the target key part with a corresponding key part in the template frame image to obtain the cosine similarity;
and determining the cosine similarity quantization value according to the cosine similarity.
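An illustrative, non-claimed sketch of the moment-feature branch of claim 1 above: the portrait segmentation is stubbed with the inter-frame difference option named in the claim, the largest contour is taken as the target shape, the seven geometric invariant (Hu) moments are extracted as shape features, and the moment feature vector distance is the Euclidean distance between the two log-scaled moment vectors. The threshold value 25 and the log-scaling are implementation assumptions, not requirements of the claim.

    import cv2
    import numpy as np

    def largest_contour(mask):
        # contour extraction: keep the largest connected outline as the target shape
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None

    def hu_vector(contour):
        # shape feature extraction: seven geometric invariant (Hu) moments,
        # log-scaled so the components are comparable in magnitude
        hu = cv2.HuMoments(cv2.moments(contour)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def moment_feature_distance(actual_frame, previous_frame, template_contour):
        # portrait segmentation via a simple inter-frame difference
        # (one of the two options named in the claim; threshold 25 is illustrative)
        diff = cv2.absdiff(cv2.cvtColor(actual_frame, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(previous_frame, cv2.COLOR_BGR2GRAY))
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        target = largest_contour(mask)
        if target is None:
            return None
        # matching: Euclidean distance between the two moment feature vectors
        return float(np.linalg.norm(hu_vector(target) - hu_vector(template_contour)))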
2. The device of claim 1, wherein the display component is further configured to receive size adjustment information input by a user, the size adjustment information being used to adjust the size of the template video image.
3. The device of claim 1, further comprising a communication unit;
the communication unit is used for sending the analysis result and/or the actual video image to a cloud server;
and the communication unit is used for downloading the template video file from the cloud server or receiving the template video file from a user terminal.
4. The device according to claim 3, wherein the data acquisition unit and/or the processing unit is provided on a user terminal;
when the data acquisition unit is arranged on the user terminal, the communication unit is also used for receiving the actual video image acquired by the data acquisition unit.
5. An image display system comprising the image display device according to any one of claims 1 to 4.
6. An image display method, applied to the device of claim 1, the method comprising:
acquiring a template video image in a template video file;
displaying the template video image and the real-time image of the object to be displayed in the same area;
acquiring an actual video image of the object to be displayed;
extracting key frames from the actual video images to obtain actual frame images;
comparing the actual frame image with a template frame image in the template video image to obtain an analysis result of the actual frame image; the analysis result is used for representing the similarity degree of the object gestures in the actual frame image and the template frame image; the template frame image is an image obtained by extracting a key frame from the template video image;
wherein the analysis result comprises a total quantization value of the similarity degree of the actual frame image and the template frame image, and the comparing the actual frame image with the template frame image in the template video image to obtain the analysis result of the actual frame image comprises:
acquiring a moment feature vector quantization value of the actual frame image and the template frame image;
acquiring a cosine similarity quantization value of the actual frame image and the template frame image;
determining the total quantization value according to a preset quantization value weight coefficient, the moment feature vector quantization value and the cosine similarity quantization value;
wherein obtaining the moment feature vector quantization value of the actual frame image and the template frame image comprises:
performing portrait segmentation on the actual frame image by using an inter-frame difference algorithm or a deep neural network model with a semantic segmentation function to obtain a target portrait;
extracting the outline of the target portrait to obtain a target shape;
extracting shape characteristics of the target shape to obtain a target geometric invariant moment;
matching the target geometric invariant moment with the geometric invariant moment of the object in the template frame image to obtain a moment feature vector distance between the actual frame image and the template frame image;
determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and a preset moment feature vector range;
wherein obtaining the cosine similarity quantization value of the actual frame image and the template frame image comprises:
performing key point detection on the actual frame image to obtain a plurality of target key points;
grouping the target key points to obtain target key parts;
matching the target key part with a corresponding key part in the template frame image to obtain the cosine similarity;
and determining the cosine similarity quantization value according to the cosine similarity.
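The cosine-similarity branch shared by claims 1 and 6 can be sketched as follows. The grouping of keypoint indices into key parts and the averaging over parts are assumptions (the claims fix neither a grouping scheme nor a pose estimator); the keypoint arrays are assumed to come from an external 17-point COCO-style detector.

    import numpy as np

    # illustrative grouping of COCO-style keypoint indices into key parts
    KEY_PARTS = {
        "left_arm":  [5, 7, 9],
        "right_arm": [6, 8, 10],
        "left_leg":  [11, 13, 15],
        "right_leg": [12, 14, 16],
    }

    def cosine_similarity_of_parts(actual_kps, template_kps):
        # actual_kps / template_kps: (17, 2) arrays of detected keypoint coordinates
        sims = []
        for indices in KEY_PARTS.values():
            a = actual_kps[indices].ravel().astype(float)
            t = template_kps[indices].ravel().astype(float)
            denom = np.linalg.norm(a) * np.linalg.norm(t)
            if denom > 0:
                # cosine similarity between matched key parts
                sims.append(float(np.dot(a, t) / denom))
        return sum(sims) / len(sims) if sims else 0.0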
7. The method of claim 6, wherein the method further comprises:
receiving size adjustment information input by a user;
and adjusting the size of the template video image according to the size adjustment information.
8. The method of claim 6, wherein the determining the total quantization value according to a preset quantization value weight coefficient, the moment feature vector quantization value, and the cosine similarity quantization value comprises:
normalizing the moment feature vector quantization value and the cosine similarity quantization value, and multiplying the normalized moment feature vector quantization value and the normalized cosine similarity quantization value by the corresponding quantization value weight coefficients, respectively, to obtain the total quantization value.
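A sketch of the weighted combination in claim 8, under the assumptions that the cosine similarity quantization value lies in [-1, 1], that the moment feature vector quantization value already lies in [0, 1], and that the preset weights are 0.6 and 0.4; the claim itself only requires normalization followed by multiplication with the preset weight coefficients.

    def total_quantization_value(moment_q, cosine_q, w_moment=0.6, w_cosine=0.4):
        # normalize both quantization values to [0, 1] (input ranges are assumptions)
        cosine_norm = (cosine_q + 1.0) / 2.0          # map [-1, 1] onto [0, 1]
        moment_norm = min(max(moment_q, 0.0), 1.0)    # clamp to [0, 1]
        # weight and sum to obtain the total quantization value
        return w_moment * moment_norm + w_cosine * cosine_norm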
9. The method according to any one of claims 6-8, wherein determining the moment feature vector quantization value of the actual frame image according to the moment feature vector distance and a preset moment feature vector range comprises:
if the moment feature vector distance is smaller than or equal to the minimum value of the preset moment feature vector range, determining the moment feature vector quantization value as a first value;
if the moment feature vector distance is larger than the minimum value of the moment feature vector range and smaller than the maximum value of the moment feature vector range, calculating the moment feature vector quantization value according to the moment feature vector distance, the minimum value of the moment feature vector range, and the maximum value of the moment feature vector range;
and if the moment feature vector distance is greater than or equal to the maximum value of the moment feature vector range, determining the moment feature vector quantization value as a second value.
10. The method of claim 9, wherein calculating the moment feature vector quantization value according to the moment feature vector distance, the minimum value of the moment feature vector range, and the maximum value of the moment feature vector range comprises:
according to the formula

    p = (emax - L) / (emax - emin)

determining the moment feature vector quantization value p, wherein L is the moment feature vector distance, emin is the minimum value of the moment feature vector range, and emax is the maximum value of the moment feature vector range.
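Claims 9 and 10 together define a piecewise quantization, sketched below. The first value 1.0, the second value 0.0, and the linear middle branch follow the formula reconstructed above, which is itself an assumption, since the original formula image is not reproduced in this text.

    def moment_quantization_value(L, emin, emax, first_value=1.0, second_value=0.0):
        # L: moment feature vector distance; [emin, emax]: preset range
        if L <= emin:
            return first_value        # very close match
        if L >= emax:
            return second_value       # very poor match
        # linear interpolation between the two boundary values
        return (emax - L) / (emax - emin)

For example, with emin = 0.1 and emax = 1.0, a distance of 0.55 yields p = 0.45 / 0.9 = 0.5.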
CN201910358971.6A 2019-04-30 2019-04-30 Image display device, method and system Active CN110244925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910358971.6A CN110244925B (en) 2019-04-30 2019-04-30 Image display device, method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910358971.6A CN110244925B (en) 2019-04-30 2019-04-30 Image display device, method and system

Publications (2)

Publication Number Publication Date
CN110244925A CN110244925A (en) 2019-09-17
CN110244925B (en) 2023-05-09

Family

ID=67883483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910358971.6A Active CN110244925B (en) 2019-04-30 2019-04-30 Image display device, method and system

Country Status (1)

Country Link
CN (1) CN110244925B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222482A (en) * 2020-01-14 2020-06-02 武汉幻视智能科技有限公司 Hidden face snapshot device and method for residence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI582710B * 2015-11-18 2017-05-11 Bravo Ideas Digital Co Ltd Method for recognizing an object in a moving image and creating an interactive film by automatically capturing the target image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610715A * 2007-02-14 2009-12-23 Koninklijke Philips Electronics N.V. Feedback device for guiding and supervising physical exercises
EP3333771A1 (en) * 2016-12-09 2018-06-13 Fujitsu Limited Method, program, and apparatus for comparing data hypergraphs

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Feedback image retrieval algorithm based on target region features; Guo Shuang; China Master's Theses Full-text Database, Information Science and Technology Series; 2015-07-31; pp. 27-31 of main text *

Also Published As

Publication number Publication date
CN110244925A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
US20210192758A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
WO2020103647A1 (en) Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
CN109214282B (en) A kind of three-dimension gesture critical point detection method and system neural network based
US10671156B2 (en) Electronic apparatus operated by head movement and operation method thereof
CN110998659B (en) Image processing system, image processing method, and program
US11398044B2 (en) Method for face modeling and related products
CN110032271B (en) Contrast adjusting device and method, virtual reality equipment and storage medium
CN110363867B (en) Virtual decorating system, method, device and medium
US10121273B2 (en) Real-time reconstruction of the human body and automated avatar synthesis
EP2879020B1 (en) Display control method, apparatus, and terminal
US20200065559A1 (en) Generating a video using a video and user image or video
CN107679446A (en) Human face posture detection method, device and storage medium
CN114495241B (en) Image recognition method and device, electronic equipment and storage medium
WO2020223940A1 (en) Posture prediction method, computer device and storage medium
CN111815768A (en) Three-dimensional face reconstruction method and device
CN110244925B (en) Image display device, method and system
WO2017141223A1 (en) Generating a video using a video and user image or video
CN113065458A (en) Voting method and system based on gesture recognition and electronic device
US20230284968A1 (en) System and method for automatic personalized assessment of human body surface conditions
CN116563588A (en) Image clustering method and device, electronic equipment and storage medium
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN116091541A (en) Eye movement tracking method, eye movement tracking device, electronic device, storage medium, and program product
CN111222448B (en) Image conversion method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200414

Address after: 1706, Fangda building, No. 011, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen shuliantianxia Intelligent Technology Co.,Ltd.

Address before: 518051 Shenzhen Nanshan High-tech South District, Shenzhen City, Guangdong Province, No. 6 South Science and Technology 10 Road, Shenzhen Institute of Space Science and Technology Innovation Building, Block D, 10th Floor, 1003

Applicant before: SHENZHEN H & T HOME ONLINE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant