CN117651170A - Display device, terminal device and image processing method


Info

Publication number: CN117651170A
Application number: CN202310507948.5A
Authority: CN (China)
Prior art keywords: coordinates, target, resolution, human body, display
Other languages: Chinese (zh)
Inventors: 刘兆磊, 周小萌, 冯聪, 杨鲁明
Assignee (original and current): Hisense Visual Technology Co Ltd
Legal status: Pending


Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The disclosure relates to a display device, a terminal device and an image processing method, applied to the technical field of smart televisions. The display device comprises a communicator configured to receive a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and a preset resolution; and a controller configured to: parse the target data packet to obtain the target coordinates and the preset resolution; determine a target scale based on the preset resolution and the display resolution of the display device; scale the target coordinates according to the target scale to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are located in a preset display area; and draw an avatar corresponding to the current frame user image based on the scaled coordinates of each human body key point. The method can solve the problems of picture lag, stuttering, and even freezing caused by insufficient computing power of the display device when it identifies each human body key point in the user image during visual human-computer interaction.

Description

Display device, terminal device and image processing method
Technical Field
The embodiments of the present application relate to smart television technology, and more particularly, to a display device, a terminal device, and an image processing method.
Background
With the popularization of smart display devices, demands for visual human-computer interaction have emerged, such as follow-along fitness training and gesture control. However, during visual human-computer interaction, the smart display device needs to acquire the user image and identify each human body key point in the user image based on a limb detection algorithm, which requires the smart display device to have strong computing power and consumes considerable resources.
At present, mainstream smart display devices are used mainly for playing video and generally have low hardware configurations; during human-computer interaction, insufficient computing power of the smart display device may therefore cause picture lag, stuttering, or even freezing.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present application provides a display device that can alleviate the picture lag, stuttering, and even freezing caused by insufficient computing power of the display device when it identifies each human body key point in the user image during visual human-computer interaction.
In a first aspect, an embodiment of the present application provides a display device, including: a communicator configured to: receiving a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and preset resolution, the target coordinates comprise coordinates of each human body key point in a current frame user image determined by the terminal device through a limb detection algorithm under the preset resolution, and the current frame user image is acquired by the terminal device; a controller configured to: analyzing the target data packet to obtain a target coordinate and a preset resolution; determining a target proportion based on a preset resolution and a display resolution of the display device; scaling the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area; and drawing an avatar corresponding to the user image of the current frame based on the scaling coordinates of each human body key point.
In a second aspect, the present application provides a terminal device, including: an image collector configured to: collecting a user image of a current frame; a controller configured to: determining coordinates of each human body key point in the current frame of user image under a preset resolution through a limb detection algorithm to obtain target coordinates; generating a target data packet based on the target coordinates and a preset resolution; a communicator configured to: and sending a target data packet to the display device, so that the display device determines scaling coordinates of all human body key points for drawing the virtual image corresponding to the current frame user image based on target coordinates and preset resolution obtained by analyzing the target data packet, wherein the scaling coordinates of all human body key points are obtained by scaling the target coordinates according to the target scale, the scaling coordinates of all human body key points are all located in a preset display area of the display device, and the target scale is obtained based on the preset resolution and the display resolution of the display device.
In a third aspect, there is provided an image processing method applied to a display device, the method comprising: receiving a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and preset resolution, the target coordinates comprise coordinates of each human body key point in a current frame user image determined by the terminal device through a limb detection algorithm under the preset resolution, and the current frame user image is acquired by the terminal device; analyzing the target data packet to obtain a target coordinate and a preset resolution; determining a target proportion based on a preset resolution and a display resolution of the display device; scaling the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area; and drawing an avatar corresponding to the user image of the current frame based on the scaling coordinates of each human body key point.
In a fourth aspect, there is provided an image processing method applied to a terminal device, the method including: collecting a user image of a current frame; determining coordinates of each human body key point in the current frame of user image under a preset resolution through a limb detection algorithm to obtain target coordinates; generating a target data packet based on the target coordinates and a preset resolution; and sending a target data packet to the display device, so that the display device determines scaling coordinates of all human body key points for drawing the virtual image corresponding to the current frame user image based on target coordinates and preset resolution obtained by analyzing the target data packet, wherein the scaling coordinates of all human body key points are obtained by scaling the target coordinates according to the target scale, the scaling coordinates of all human body key points are all located in a preset display area of the display device, and the target scale is obtained based on the preset resolution and the display resolution of the display device.
In a fifth aspect, the present application provides a computer-readable storage medium comprising: the computer-readable storage medium stores thereon a computer program which, when executed by a processor, implements the image processing method as shown in the third aspect or the fourth aspect.
In a sixth aspect, the present application provides a computer program product comprising: the computer program product, when run on a computer, causes the computer to implement the image processing method as shown in the third or fourth aspect.
Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantages. In the embodiments of the present application, a terminal device acquires a current frame user image, determines the coordinates of each human body key point in the current frame user image at a preset resolution through a limb detection algorithm to obtain target coordinates, generates a target data packet based on the target coordinates and the preset resolution, and sends the target data packet to a display device. The display device receives the target data packet sent by the terminal device, parses it to obtain the target coordinates and the preset resolution, determines a target scale based on the preset resolution and the display resolution of the display device, and scales the target coordinates according to the target scale to obtain scaled coordinates of each human body key point, all of which are located in a preset display area; it then draws an avatar corresponding to the current frame user image based on the scaled coordinates of each human body key point. In this scheme, the identification of each human body key point, which demands higher computing power, is performed by the terminal device; the display device only needs to parse the data packet, scale the coordinates, and draw, so the computing power required of the display device is low. In this way, the large-screen display advantage of the display device is combined with the stronger computing power of the terminal device, so that the user can interact smoothly with the display device through its large screen, and the user's human-computer interaction experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the implementations in the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from these drawings by those of ordinary skill in the art without creative effort.
FIG. 1 illustrates an operational scenario between a control device and a display device according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the control device 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a schematic diagram of a current frame image acquired by a terminal device according to some embodiments;
FIG. 5 illustrates a schematic diagram of various human keypoints derived from current frame images, in accordance with some embodiments;
FIG. 6 illustrates a schematic diagram of an avatar displayed at a display device according to some embodiments;
FIG. 7 illustrates a schematic diagram of user images at different placement angles displayed at a preset resolution, according to some embodiments;
FIG. 8 illustrates one of the flow diagrams of an image processing method according to some embodiments;
FIG. 9 illustrates a second flow diagram of an image processing method according to some embodiments;
FIG. 10 illustrates a third flow diagram of an image processing method in accordance with some embodiments;
FIG. 11 illustrates a fourth flow diagram of an image processing method, according to some embodiments;
FIG. 12 illustrates a fifth flow diagram of an image processing method, according to some embodiments;
fig. 13 illustrates a sixth flow diagram of an image processing method according to some embodiments.
Detailed Description
For purposes of clarity and completeness of the present application, exemplary implementations of the present application are described below clearly and completely with reference to the accompanying drawings in which they are illustrated. It is apparent that the described exemplary implementations are only some, but not all, of the embodiments of the present application.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," second, "" third and the like in the description and in the claims and in the above-described figures are used for distinguishing between similar or similar objects or entities and not necessarily for limiting a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided in this embodiment of the present application may have various implementation forms, and for example, may be a television, an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table), a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, and the like.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment, wherein the control device includes a smart device or a control apparatus. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, or other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device may also receive instructions without using the smart device or the control apparatus described above, and instead receive the user's control through touch, gestures, or the like.
In some embodiments, the display device 200 may also be controlled in a manner other than by the control apparatus 100 and the smart device 300. For example, the user's voice command may be received directly through a module for acquiring voice commands configured inside the display device 200, or through a voice control device configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be allowed to make communication connections via a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, an external memory, and a power supply. The control apparatus 100 may receive an input operation instruction of a user and convert the operation instruction into an instruction recognizable and responsive to the display device 200, and function as an interaction between the user and the display device 200.
As shown in fig. 3, the display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a user interface 280, an external memory, and a power supply.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth interfaces for input/output.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output from the controller and display video content, image content, a menu manipulation interface, and a user manipulation UI interface.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
The user interface 280 may be used to receive control signals from the control device 100 (e.g., an infrared remote control, etc.). Or may be used to directly receive user input operation instructions and convert the operation instructions into instructions recognizable and responsive by the display device 200, which may be referred to as a user input interface.
The detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite type input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals through a wired or wireless reception manner, and demodulates audio and video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on a memory (internal memory or external memory). The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), and a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
RAM, also called main memory, is internal memory that exchanges data directly with the controller. It can be read and written at any time (except while being refreshed) and is fast, so it commonly serves as temporary data storage for the operating system or other running programs. Its biggest difference from ROM is data volatility: the stored data is lost on power-down. RAM is used in computers and digital systems to temporarily store programs, data, and intermediate results. ROM operates in a non-destructive readout mode; its contents can only be read, not written. Once written, the information is fixed and is not lost even when the power supply is turned off, so ROM is also called fixed memory.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the display device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments of the present application, a display device includes a communicator and a controller, corresponding to the communicator 220 and the controller 250 of the display device shown in fig. 3 above.
In some embodiments of the present application, there is provided a display device including: a communicator configured to: receiving a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and preset resolution, the target coordinates comprise coordinates of each human body key point in a current frame user image determined by the terminal device through a limb detection algorithm under the preset resolution, and the current frame user image is acquired by the terminal device; a controller configured to: analyzing the target data packet to obtain a target coordinate and a preset resolution; determining a target proportion based on a preset resolution and a display resolution of the display device; scaling the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area; and drawing an avatar corresponding to the user image of the current frame based on the scaling coordinates of each human body key point.
In some embodiments of the present application, there is provided a terminal device including: an image collector configured to: collecting a user image of a current frame; a controller configured to: determining coordinates of each human body key point in the current frame of user image under a preset resolution through a limb detection algorithm to obtain target coordinates; generating a target data packet based on the target coordinates and a preset resolution; a communicator configured to: and sending a target data packet to the display device, so that the display device determines scaling coordinates of all human body key points for drawing the virtual image corresponding to the current frame user image based on target coordinates and preset resolution obtained by analyzing the target data packet, wherein the scaling coordinates of all human body key points are obtained by scaling the target coordinates according to the target scale, the scaling coordinates of all human body key points are all located in a preset display area of the display device, and the target scale is obtained based on the preset resolution and the display resolution of the display device.
It will be appreciated that a connection is established between the terminal device and the display device, and the specific connection may be a wired connection or a wireless connection, and communication is performed through the established connection.
It can be understood that the format of the data packet is a custom format determined by negotiation between the terminal device and the display device.
It can be understood that the terminal device continuously collects the user images, and the collected frequency is determined according to actual needs, and the embodiment of the application is not limited.
It will be appreciated that the human body key points include: nose key points, left eye key points, right eye key points, mouth key points, left ear key points, right ear key points, left shoulder key points, right shoulder key points, left elbow key points, right elbow key points, left wrist key points, right wrist key points, left hip key points, right hip key points, left knee key points, right knee key points, left ankle key points, right ankle key points, and the like.
For example, as shown in fig. 4, the current frame user image acquired by the terminal device is input into the limb detection algorithm to obtain the key points of the current frame user image shown in fig. 5, together with the two-dimensional coordinates of each human body key point: reference numeral 0 indicates the nose key point with coordinates (x0, y0), 1 the left eye key point (x1, y1), 2 the right eye key point (x2, y2), 3 the left ear key point (x3, y3), 4 the right ear key point (x4, y4), 5 the left shoulder key point (x5, y5), 6 the right shoulder key point (x6, y6), 7 the left elbow key point (x7, y7), 8 the right elbow key point (x8, y8), 9 the left wrist key point (x9, y9), 10 the right wrist key point (x10, y10), 11 the left hip key point (x11, y11), 12 the right hip key point (x12, y12), 13 the left knee key point (x13, y13), 14 the right knee key point (x14, y14), 15 the left ankle key point (x15, y15), and 16 the right ankle key point (x16, y16). The target coordinates are the coordinates of all the identified human body key points in the current frame user image.
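As an illustration only, the key point indexing above can be summarized in a small table. The Python sketch below assumes this ordering; the name strings and placeholder coordinates are not taken from the application.

```python
# Key point indices 0-16 as described for fig. 5; the order mirrors the
# description and is illustrative only - in practice it is whatever order
# the terminal device and display device agree on.
KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Target coordinates: one (x, y) pixel pair per key point, in the same order.
# The values here are placeholders, not real detection output.
target_coordinates = [(125, 126), (62, 125)] + [(-1, -1)] * 15
```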
It can be understood that the coordinates of each human body key point and the confidence coefficient of each human body key point can be obtained through a limb detection algorithm, the value of the confidence coefficient is generally more than 0 and less than 1, and the larger the numerical value is, the more reliable the identification result is.
It will be appreciated that, in order to reduce the amount of data transmitted, the positions of the key points within the data packet are agreed in advance. For example, when the display device receives the target data packet, the first coordinate obtained by parsing is the coordinate of the nose key point and the second is the coordinate of the left eye key point.
It can be understood that, for the detected key points, a confidence threshold is set, if the confidence of a certain key point is smaller than the confidence threshold, the coordinates of the key point are set to be invalid coordinates, such as (-1, -1), and when the subsequent display device draws the key point based on the coordinates, the key point corresponding to the coordinates is not drawn.
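A minimal sketch of this confidence filtering follows, assuming a threshold value of 0.3 (the application only states that a threshold is set, not its value); the function name is illustrative.

```python
INVALID = (-1, -1)  # sentinel coordinates for key points that are not drawn

def filter_keypoints(coords, confidences, threshold=0.3):
    """Replace the coordinates of low-confidence key points with the invalid
    sentinel so the display device skips drawing them."""
    return [
        (x, y) if conf >= threshold else INVALID
        for (x, y), conf in zip(coords, confidences)
    ]
```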
It can be understood that the target coordinates are the coordinates of the identified key points of the human body under the preset resolution, and the coordinate system adopted in the application has an origin at the upper left corner of the acquired current frame image, and the minimum unit in the coordinate system is a pixel.
Illustratively, the preset resolution is 1920×1080, i.e., indicates 1920 pixels in the length direction and 1080 pixels in the width direction; the preset resolution is 1080×1920, that is, indicates 1080 pixels in the length direction and 1920 pixels in the width direction; the preset resolution is 1280×720, indicating 1280 pixels in the length direction and 720 pixels in the width direction. If a certain key point coordinate is (10, 20), it indicates that it is 10 pixels apart from the origin in the x-axis and 20 pixels apart from the origin in the y-axis direction.
Illustratively, the target data packet is { Width:1920, Height:1080, data:125_126#62_125#-1_-1 … }. The display device receives the data packet and parses it to obtain the preset resolution 1920×1080, then splits the data field on '#' to obtain the coordinates of each key point: the nose key point is (125, 126), the left eye key point is (62, 125), and the right eye key point is (-1, -1), i.e., an invalid key point.
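A sketch of how the display device might parse a packet in the example format above: the field names Width, Height and data and the '#' / '_' separators follow the example, while the helper function itself is an assumption.

```python
def parse_target_packet(packet):
    """Parse a packet such as {"Width": 1920, "Height": 1080,
    "data": "125_126#62_125#-1_-1"} into the preset resolution and a list
    of (x, y) key point coordinates."""
    preset_resolution = (packet["Width"], packet["Height"])
    coords = []
    for pair in packet["data"].split("#"):
        x, y = pair.split("_")
        coords.append((int(x), int(y)))
    return preset_resolution, coords

# Example: the third pair (-1, -1) marks an invalid key point.
resolution, coords = parse_target_packet(
    {"Width": 1920, "Height": 1080, "data": "125_126#62_125#-1_-1"}
)
```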
It can be understood that scaling is performed on the target coordinates according to the target scale to obtain scaled coordinates of each human body key point, so that the scaled coordinates of each human body key point are located in the preset display area.
Optionally, the target scale is determined based on the preset resolution and the display resolution of the display device. Specifically, if the preset resolution is A×B and the display resolution is C×D, the target scale is the smaller of C/A and D/B; or the target scale is the smallest of C/A, D/B, and a preset ratio, where the preset ratio may be a fixed value, determined from historically collected user images and the preset display area, that gives a good display effect after scaling. How the target scale is determined is not limited in the embodiments of the present application.
Illustratively, the preset resolution is 1920×1080, the display resolution is 1920×1080, and the target ratio is 1; the preset resolution is 640×360, the display resolution is 1920×720, 1920/640=3, 720/360=2, and the target ratio is 2.
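A sketch of the scale computation and the scaling step, using the min(C/A, D/B) rule described above; rounding scaled coordinates to integer pixels is an assumption.

```python
def target_scale(preset_resolution, display_resolution):
    """preset_resolution = (A, B), display_resolution = (C, D); the target
    scale is the smaller of C/A and D/B so the scaled coordinates fit."""
    (a, b), (c, d) = preset_resolution, display_resolution
    return min(c / a, d / b)

def scale_coordinates(coords, scale, invalid=(-1, -1)):
    """Scale every valid key point coordinate; invalid points pass through."""
    return [
        (round(x * scale), round(y * scale)) if (x, y) != invalid else invalid
        for (x, y) in coords
    ]

# From the example above: preset 640x360, display 1920x720 -> scale 2.
print(target_scale((640, 360), (1920, 720)))  # 2.0
```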
In this application, the coordinates of each key point are expressed in pixels, but two screens with the same resolution and different physical dimensions have pixels of different physical sizes.
As shown in fig. 6, the area indicated by reference numeral 60 shows the coordinates of each human body key point obtained by the terminal device, at the preset resolution, through the limb detection algorithm based on the acquired current frame user image. Reference numeral 61 indicates the display interface of the display device, which includes an exemplary video display area indicated by reference numeral 610 and a preset display area indicated by reference numeral 611, where the preset display area displays the avatar drawn from the scaled coordinates of each human body key point.
In the embodiment of the application, a terminal device acquires a current frame user image, determines coordinates of each human body key point in the current frame user image under a preset resolution through a limb detection algorithm to obtain a target coordinate, generates a target data packet based on the target coordinate and the preset resolution, and sends the target data packet to a display device; the display equipment receives a target data packet sent by the terminal equipment, analyzes the target data packet to obtain target coordinates and preset resolution, determines a target proportion based on the preset resolution and the display resolution of the display equipment, performs scaling processing on the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area, and draws an avatar corresponding to a current frame of user image based on the scaled coordinates of each human body key point. In the scheme, the identification process of each human body key point with higher calculation force requirement is carried out by the terminal equipment, the display equipment only needs to analyze the data packet, the coordinate is drawn after being scaled, and the calculation force requirement on the display equipment is lower. Therefore, the advantage of large-screen display of the display device and the advantage of stronger computing power of the terminal device are combined, so that a user can smoothly interact with the display device through the large-screen of the display device, and human-computer interaction experience of the user is improved.
In some embodiments, the terminal device determines a display angle of the current frame of user image based on the placement angle, the terminal device generates a target data packet based on the target coordinate, the preset resolution and the display angle, and the terminal device sends target data including the display angle to the display device; the display equipment analyzes the target data packet to obtain target coordinates, preset resolution and display angles; the display equipment corrects the preset resolution according to the display angle to obtain corrected resolution; correcting the target coordinates according to the display angle to obtain corrected coordinates of key points of each human body; determining the smaller of the first proportion and the second proportion as a target proportion; and scaling the corrected coordinates of the key points of each human body according to the target proportion to obtain scaled coordinates of the key points of each human body.
The first ratio is the ratio of the number of pixels in the length direction of the display resolution to the number of pixels in the length direction of the correction resolution, and the second ratio is the ratio of the number of pixels in the width direction of the display resolution to the number of pixels in the width direction of the correction resolution.
It can be understood that the placement angle of the terminal device is determined by an angle sensor in the terminal device, with 0 degrees preset as the reference; on this basis, the display angle of the current frame user image is determined from the placement angle of the terminal device.
It can be understood that the position of the image collector on the terminal device is fixed, so different placement angles of the terminal device result in different orientations of the collected user image.
Illustratively, as shown in fig. 7, the preset landscape 0-degree orientation is shown by reference numeral 701, the portrait 90-degree orientation by reference numeral 702, the landscape 180-degree orientation by reference numeral 703, and the portrait 270-degree orientation by reference numeral 704. Therefore, in order to display the avatar in the same pose as the user's actual pose in the current frame user image, the target coordinates and the preset resolution need to be corrected according to the display angle; otherwise the avatar may not match the user's actual pose (for example, the user's actual pose is as shown in fig. 4, but because of the placement angle of the terminal device the avatar drawn from the recognized key point coordinates appears as shown by reference numeral 702).
In the embodiment of the application, the terminal equipment determines the display angle of the user image of the current frame based on the placement angle, generates a target data packet based on the target coordinate, the preset resolution and the display angle, and sends target data comprising the display angle to the display equipment; the display equipment analyzes the target data packet to obtain target coordinates, preset resolution and display angles; the display equipment corrects the preset resolution according to the display angle to obtain corrected resolution; correcting the target coordinates according to the display angle to obtain corrected coordinates of key points of each human body; determining the smaller of the first proportion and the second proportion as a target proportion; and scaling the corrected coordinates of the key points of each human body according to the target proportion to obtain scaled coordinates of the key points of each human body. The first ratio is the ratio of the number of pixels in the length direction of the display resolution to the number of pixels in the length direction of the correction resolution, and the second ratio is the ratio of the number of pixels in the width direction of the display resolution to the number of pixels in the width direction of the correction resolution. Therefore, the gesture of the displayed virtual image is the same as the actual gesture of the user in the user image of the current frame, and the phenomenon that the drawn virtual image is inconsistent with the actual gesture of the user due to different placement angles of the terminal equipment is avoided.
In some embodiments, the display device corrects the preset resolution according to the display angle to obtain a corrected resolution; correcting the target coordinates according to the display angle to obtain corrected coordinates of key points of each human body; the method comprises the following steps: in the case where the display angle is 0 degrees or 180 degrees, the correction resolution is kept the same as the preset resolution; under the condition that the display angle is 90 degrees or 270 degrees, the length and the width of the preset resolution are adjusted to obtain the correction resolution: under the condition that the display angle is 0 degree, keeping the correction coordinates of each human body key point to be the same as the target coordinates; under the condition that the display angle is 90 degrees, respectively rotating each human body key point indicated by the target coordinates by 90 degrees clockwise around a center point of a preset resolution to obtain correction coordinates of each human body key point; under the condition that the display angle is 180 degrees, rotating each human body key point indicated by the target coordinates by 180 degrees clockwise around a center point of a preset resolution to obtain correction coordinates of each human body key point; and under the condition that the display angle is 270 degrees, respectively rotating each human body key point indicated by the target coordinates by 270 degrees clockwise around the center point of the preset resolution to obtain the correction coordinates of each human body key point.
It can be understood that, when the display angle is 0 degrees, the corrected coordinates of each human body key point are the same as before correction. When the display angle is 90 degrees, the abscissa of the corrected key point is the width of the preset resolution minus the ordinate of the key point before correction, and the ordinate of the corrected key point is the abscissa of the key point before correction. When the display angle is 180 degrees, the abscissa of the corrected key point is the abscissa of the key point before correction, and the ordinate of the corrected key point is the length of the preset resolution minus the ordinate of the key point before correction. When the display angle is 270 degrees, the abscissa of the corrected key point is the ordinate of the key point before correction, and the ordinate of the corrected key point is the length of the preset resolution minus the abscissa of the key point before correction. The following description takes a target human body key point as an example, where the target human body key point is any human body key point; the coordinate correction process of each human body key point is the same as that of the target human body key point.
Illustratively, the preset resolution is 1920×1080 and the target human body key point coordinates are (125, 126). When the display angle is 0 degrees or 180 degrees, the corrected resolution is 1920×1080; when the display angle is 90 degrees or 270 degrees, the corrected resolution is 1080×1920. When the display angle is 0 degrees, the corrected coordinates of the target human body key point are (125, 126). When the display angle is 90 degrees, the key point indicated by the target coordinates is rotated 90 degrees clockwise around the center point (960, 540); the abscissa of the corrected target key point is 1080-126=954 and the ordinate is 125, i.e., the corrected coordinates are (954, 125). When the display angle is 180 degrees, the key point is rotated 180 degrees clockwise around the center point (960, 540); the abscissa of the corrected target key point is 125 and the ordinate is 1920-126=1794, i.e., the corrected coordinates are (125, 1794). When the display angle is 270 degrees, the key point is rotated 270 degrees clockwise around the center point (960, 540); the abscissa of the corrected target key point is 126 and the ordinate is 1920-125=1795, i.e., the corrected coordinates are (126, 1795).
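The correction rules above can be sketched as follows. The function mirrors the formulas stated in this embodiment for display angles of 0, 90, 180 and 270 degrees; the function name and data layout are assumptions.

```python
def correct_for_display_angle(preset_resolution, coords, display_angle,
                              invalid=(-1, -1)):
    """Correct the preset resolution and key point coordinates according to
    the display angle, following the rules stated in this embodiment.
    preset_resolution = (length, width); coords is a list of (x, y)."""
    length, width = preset_resolution
    corrected_resolution = (
        (width, length) if display_angle in (90, 270) else (length, width)
    )

    def rotate(point):
        if point == invalid:
            return point
        x, y = point
        if display_angle == 0:
            return (x, y)
        if display_angle == 90:
            return (width - y, x)
        if display_angle == 180:
            return (x, length - y)
        return (y, length - x)  # 270 degrees

    return corrected_resolution, [rotate(p) for p in coords]

# Example from the description: preset 1920x1080, key point (125, 126),
# display angle 90 degrees -> corrected resolution 1080x1920, point (954, 125).
print(correct_for_display_angle((1920, 1080), [(125, 126)], 90))
```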
In the embodiment of the application, the display device corrects the preset resolution according to the display angle to obtain the corrected resolution; correcting the target coordinates according to the display angle to obtain corrected coordinates of key points of each human body; the method comprises the following steps: in the case where the display angle is 0 degrees or 180 degrees, the correction resolution is kept the same as the preset resolution; under the condition that the display angle is 90 degrees or 270 degrees, the length and the width of the preset resolution are adjusted to obtain the correction resolution: under the condition that the display angle is 0 degree, keeping the correction coordinates of each human body key point to be the same as the target coordinates; under the condition that the display angle is 90 degrees, respectively rotating each human body key point indicated by the target coordinates by 90 degrees clockwise around a center point of a preset resolution to obtain correction coordinates of each human body key point; under the condition that the display angle is 180 degrees, rotating each human body key point indicated by the target coordinates by 180 degrees clockwise around a center point of a preset resolution to obtain correction coordinates of each human body key point; and under the condition that the display angle is 270 degrees, respectively rotating each human body key point indicated by the target coordinates by 270 degrees clockwise around the center point of the preset resolution to obtain the correction coordinates of each human body key point. And correcting the preset resolution and the target coordinates so that the posture of the drawn virtual image is the same as the actual posture of the user in the user image of the current frame based on the corrected human body key point coordinates and the corrected resolution, and further improving the user experience.
In some embodiments, the terminal device determines a display angle of the current frame user image based on a placement angle of the terminal device; determining initial coordinates of each human body key point in the current frame of user image under a first resolution through a limb detection algorithm; correcting the first resolution and the initial coordinate according to the display angle to obtain a preset resolution and a target coordinate, and generating a target data packet based on the target coordinate and the preset resolution; the terminal equipment sends a target data packet to the display equipment; the display equipment receives a target data packet sent by the terminal equipment, analyzes the target data packet to obtain target coordinates and preset resolution, determines a target proportion based on the preset resolution and the display resolution of the display equipment, performs scaling processing on the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area, and draws an avatar corresponding to a current frame of user image based on the scaled coordinates of each human body key point.
It can be understood that, after obtaining the initial coordinates of each human body key point at the first resolution based on the limb detection algorithm, the terminal device corrects the initial coordinates of each human body key point and the first resolution based on the display angle, so that the user pose indicated by the target coordinates is the same as the user's actual pose in the current frame user image. That is, the initial coordinates of each human body key point at the first resolution have already been corrected on the terminal device side, and only the parsing, scaling, and drawing operations need to be performed on the display device side.
In the embodiment of the application, the terminal equipment determines the display angle of the user image of the current frame based on the placement angle of the terminal equipment; determining initial coordinates of each human body key point in the current frame of user image under a first resolution through a limb detection algorithm; correcting the first resolution and the initial coordinate according to the display angle to obtain a preset resolution and a target coordinate, and generating a target data packet based on the target coordinate and the preset resolution; the terminal equipment sends a target data packet to the display equipment; the display equipment receives a target data packet sent by the terminal equipment, analyzes the target data packet to obtain target coordinates and preset resolution, determines a target proportion based on the preset resolution and the display resolution of the display equipment, performs scaling processing on the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are positioned in a preset display area, and draws an avatar corresponding to a current frame of user image based on the scaled coordinates of each human body key point. Namely, the initial coordinates of each human body key of the first resolution are corrected at the terminal equipment side, and only the analysis, the scaling and the drawing operations are needed to be executed at the display equipment side, so that the calculation force requirement on the display equipment is further reduced, and the human-computer interaction process is smoother.
In some embodiments, the terminal device corrects the first resolution and the initial coordinates according to the display angle to obtain the preset resolution and the target coordinates. Specifically, when the display angle is 0 degrees, the preset resolution is kept the same as the first resolution and the target coordinates are kept the same as the initial coordinates; when the display angle is 90 degrees, the length and width of the first resolution are swapped to obtain the preset resolution, and each human body key point indicated by the initial coordinates is rotated 90 degrees clockwise around the center point of the first resolution to obtain the target coordinates; when the display angle is 180 degrees, the preset resolution is kept the same as the first resolution, and each human body key point indicated by the initial coordinates is rotated 180 degrees clockwise around the center point of the first resolution to obtain the target coordinates; when the display angle is 270 degrees, the length and width of the first resolution are swapped to obtain the preset resolution, and each human body key point indicated by the initial coordinates is rotated 270 degrees clockwise around the center point of the first resolution to obtain the target coordinates.
It should be noted that, the process of correcting the first resolution and the initial coordinates of each human body key point may refer to the process of correcting the preset resolution and the target coordinates on the display device side, which is not described herein.
In the embodiments of the present application, the terminal device corrects the first resolution and the initial coordinates according to the display angle to obtain the preset resolution and the target coordinates: when the display angle is 0 degrees, the preset resolution is kept the same as the first resolution and the target coordinates the same as the initial coordinates; when the display angle is 90 degrees, the length and width of the first resolution are swapped to obtain the preset resolution, and each human body key point indicated by the initial coordinates is rotated 90 degrees clockwise around the center point of the first resolution to obtain the target coordinates; when the display angle is 180 degrees, the preset resolution is kept the same as the first resolution, and each key point indicated by the initial coordinates is rotated 180 degrees clockwise around the center point of the first resolution to obtain the target coordinates; when the display angle is 270 degrees, the length and width of the first resolution are swapped to obtain the preset resolution, and each key point indicated by the initial coordinates is rotated 270 degrees clockwise around the center point of the first resolution to obtain the target coordinates. In this way, the terminal device performs the correction of the first resolution and the initial coordinates of each human body key point, which further reduces the data processing pressure on the display device while ensuring that the user pose indicated by the target coordinates is the same as the user's actual pose in the current frame user image, making the human-computer interaction process smoother.
In some embodiments, the terminal device determines a display angle of the current frame user image based on a placement angle of the terminal device; specifically, in the case where the placement angle is in the first range or the second range, the display angle is determined to be 0 degrees; in the case where the placement angle is in the third range, determining that the display angle is 90 degrees; in the case where the placement angle is in the fourth range, determining that the display angle is 180 degrees; in the case where the placement angle is in the fifth range, the display angle is determined to be 270 degrees.
Wherein, any value of the first range is less than the minimum value of the third range, any value of the third range is less than the minimum value of the fourth range, any value of the fourth range is less than the minimum value of the fifth range, and any value of the fifth range is less than the minimum value of the second range.
It can be understood that the placement angle of the terminal device is determined by the angle sensor in the terminal device, but the angle measured by the sensor is fine-grained and may not match the actual usage situation. For example, the sensor may measure a placement angle of 89 degrees simply because the surface on which the terminal device is placed is not perfectly level. Mapping placement angles to ranges avoids frequently adjusting the display angle according to the placement angle and then repeatedly correcting the preset resolution and the target coordinates according to the display angle.
Illustratively, the first range is: greater than or equal to 0 degrees and less than 45 degrees; the second range is: 315 degrees or more and 360 degrees or less; the third range is: greater than or equal to 45 degrees and less than 135 degrees; the fourth range is: greater than or equal to 135 degrees and less than 225 degrees; the fifth range is: greater than or equal to 225 degrees and less than 315 degrees.
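A sketch of the mapping from placement angle to display angle using the example ranges above; the function name and the modulo handling of out-of-range inputs are assumptions.

```python
def display_angle_from_placement(placement_angle):
    """Map a placement angle (degrees) to a display angle using the example
    ranges: [0, 45) or [315, 360) -> 0; [45, 135) -> 90; [135, 225) -> 180;
    [225, 315) -> 270."""
    a = placement_angle % 360
    if a < 45 or a >= 315:
        return 0
    if a < 135:
        return 90
    if a < 225:
        return 180
    return 270
```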
It can be understood that, in order to improve the accuracy of the obtained display angle, the placement angle of the terminal device may be sampled according to a preset period, and the display angle corresponding to the range into which the largest number of placement angles fall within one period is determined as the final display angle.
For example, the preset period is 10 samples: the placement angle of the terminal device is obtained 10 times in succession, and each range corresponds to a counter. Suppose the 10 sampled placement angles are 92 degrees, 90 degrees, 91 degrees, 0 degrees, 91 degrees, 90 degrees, 92 degrees, 90 degrees, 91 degrees, and 90 degrees; the counter for the third range then reads 9, the counter for the first range reads 1, and the counters for the other ranges read 0, so the placement angle of the terminal device is determined to be in the third range at this time, and the corresponding display angle is 90 degrees. After one period ends, the counters are cleared and the process is repeated.
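A sketch of this period-and-counter scheme, assuming a period of 10 samples; the range mapping from the previous sketch is inlined so the example is self-contained.

```python
from collections import Counter

def stable_display_angle(placement_samples):
    """Return the display angle whose range collects the most of the sampled
    placement angles within one period (e.g. 10 consecutive readings)."""
    def to_display_angle(a):
        a %= 360
        if a < 45 or a >= 315:
            return 0
        return 90 if a < 135 else (180 if a < 225 else 270)

    counts = Counter(to_display_angle(a) for a in placement_samples)
    return counts.most_common(1)[0][0]

# The example above: nine readings fall in the 90-degree range and one in the
# 0-degree range, so the display angle for this period is 90 degrees.
print(stable_display_angle([92, 90, 91, 0, 91, 90, 92, 90, 91, 90]))
```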
In the embodiment of the application, the terminal equipment determines the display angle of the user image of the current frame based on the placement angle of the terminal equipment; specifically, in the case where the placement angle is in the first range or the second range, the display angle is determined to be 0 degrees; in the case where the placement angle is in the third range, determining that the display angle is 90 degrees; in the case where the placement angle is in the fourth range, determining that the display angle is 180 degrees; in the case where the placement angle is in the fifth range, the display angle is determined to be 270 degrees. Therefore, the preset resolution and the target coordinates are prevented from being frequently corrected according to the display angle, the calculation force in the correction process is saved, and the experience of a user in the process of human-computer interaction is further improved.
In the embodiment of the present application, an image processing method is provided, as shown in fig. 8, and specifically includes the following steps 401 to 409.
401. The terminal equipment collects the user image of the current frame.
402. The terminal equipment determines the coordinates of each human body key point in the current frame user image under the preset resolution through a limb detection algorithm to obtain target coordinates.
403. And the terminal equipment generates a target data packet based on the target coordinates and the preset resolution.
404. And the terminal equipment sends the target data packet to the display equipment.
405. And the display equipment receives the target data packet sent by the terminal equipment.
406. The display device analyzes the target data packet to obtain target coordinates and preset resolution.
407. The display device determines the target ratio based on the preset resolution and the display resolution of the display device.
408. And the display equipment performs scaling processing on the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point.
The scaled coordinates of each human body key point are located in a preset display area.
409. The display device draws an avatar corresponding to the current frame user image based on the scaled coordinates of each human body key point.
In the embodiment of the application, the terminal device acquires a current frame user image, determines the coordinates of each human body key point in the current frame user image under a preset resolution through a limb detection algorithm to obtain target coordinates, generates a target data packet based on the target coordinates and the preset resolution, and sends the target data packet to the display device; the display device receives the target data packet sent by the terminal device, analyzes the target data packet to obtain the target coordinates and the preset resolution, determines a target proportion based on the preset resolution and the display resolution of the display device, performs scaling processing on the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are located in a preset display area, and draws an avatar corresponding to the current frame user image based on the scaled coordinates of each human body key point. In this scheme, the identification of each human body key point, which requires higher computing power, is carried out by the terminal device; the display device only needs to analyze the data packet and draw the coordinates after scaling them, so the computing power requirement on the display device is lower. Therefore, the advantage of large-screen display of the display device and the advantage of the stronger computing power of the terminal device are combined, so that the user can smoothly interact with the display device through its large screen, and the human-computer interaction experience of the user is improved.
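The patent does not specify a wire format for the target data packet; the sketch below assumes a simple JSON payload purely for illustration, and the function names are invented. It shows the division of labour of steps 401 to 409: the terminal side serializes the key point coordinates and the preset resolution, and the display side parses the packet, derives a target proportion from the two resolutions and scales every key point.

```python
import json
from typing import List, Tuple

Point = Tuple[float, float]

def build_target_packet(keypoints: List[Point], preset_w: int, preset_h: int) -> bytes:
    """Terminal side (steps 402-404): serialize the target coordinates and the
    preset resolution. JSON is an assumption; any compact format would do."""
    payload = {
        "resolution": {"width": preset_w, "height": preset_h},
        "keypoints": [{"x": x, "y": y} for x, y in keypoints],
    }
    return json.dumps(payload).encode("utf-8")

def parse_and_scale(packet: bytes, display_w: int, display_h: int) -> List[Point]:
    """Display side (steps 405-408): parse the packet, derive a target
    proportion from the preset and display resolutions, and scale each key
    point. Taking the smaller of the two axis ratios keeps all scaled
    coordinates inside the display area; the base embodiment only says the
    proportion is derived from the two resolutions, so this is one plausible
    choice."""
    payload = json.loads(packet.decode("utf-8"))
    preset_w = payload["resolution"]["width"]
    preset_h = payload["resolution"]["height"]
    scale = min(display_w / preset_w, display_h / preset_h)
    return [(p["x"] * scale, p["y"] * scale) for p in payload["keypoints"]]

# Example: key points detected at 1280x720 on the terminal, drawn on a 1920x1080 display.
packet = build_target_packet([(640.0, 360.0), (700.0, 200.0)], 1280, 720)
print(parse_and_scale(packet, 1920, 1080))  # [(960.0, 540.0), (1050.0, 300.0)]
```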
In some embodiments, the target data packet further comprises a display angle. Referring to fig. 8, as shown in fig. 9, step 403 may be specifically implemented by step 403a, step 406 may be specifically implemented by step 406a, step 407 may be specifically implemented by steps 407a and 407b, and step 409 may be specifically implemented by step 409a. Before step 402, the image processing method provided in the embodiment of the present application further includes step 410, and before step 408, it further includes step 411.
410. The terminal equipment determines the display angle of the user image of the current frame based on the placement angle of the terminal equipment.
403a, the terminal device generates a target data packet based on the target coordinates, the preset resolution and the display angle.
406a, the display device analyzes the target data packet to obtain the target coordinates, the preset resolution and the display angle.
407a, the display device corrects the preset resolution according to the display angle to obtain a corrected resolution.
407b, the display device determines the smaller of the first proportion and the second proportion as the target proportion.
The first proportion is the ratio of the number of pixels of the display resolution in the length direction to the number of pixels of the correction resolution in the length direction, and the second proportion is the ratio of the number of pixels of the display resolution in the width direction to the number of pixels of the correction resolution in the width direction.
411. And the display equipment corrects the target coordinates according to the display angle to obtain corrected coordinates of each human body key point.
409a, the display device performs scaling processing on the corrected coordinates of the key points of each human body according to the target proportion, so as to obtain scaled coordinates of the key points of each human body.
In the embodiment of the application, the terminal device determines the display angle of the current frame user image based on the placement angle of the terminal device; the terminal device generates a target data packet based on the target coordinates, the preset resolution and the display angle; the display device analyzes the target data packet to obtain the target coordinates, the preset resolution and the display angle; the display device corrects the preset resolution according to the display angle to obtain the correction resolution; the display device determines the smaller of the first proportion and the second proportion as the target proportion, wherein the first proportion is the ratio of the number of pixels of the display resolution in the length direction to the number of pixels of the correction resolution in the length direction, and the second proportion is the ratio of the number of pixels of the display resolution in the width direction to the number of pixels of the correction resolution in the width direction; the display device corrects the target coordinates according to the display angle to obtain the correction coordinates of each human body key point; and the display device performs scaling processing on the correction coordinates of each human body key point according to the target proportion to obtain the scaled coordinates of each human body key point. Therefore, the posture of the displayed avatar is the same as the actual posture of the user in the current frame user image, and the phenomenon that the drawn avatar is inconsistent with the actual posture of the user due to different placement angles of the terminal device is avoided.
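For the display-angle embodiment, the correction resolution and the target proportion of steps 407a and 407b can be sketched as below (Python). The mapping of "length" to horizontal pixels and "width" to vertical pixels, as well as the function names, are assumptions made for this sketch rather than definitions taken from the patent.

```python
from typing import Tuple

def correction_resolution(preset_w: int, preset_h: int, display_angle: int) -> Tuple[int, int]:
    """Step 407a as sketched here: for 90- or 270-degree display angles the
    length and width of the preset resolution are interchanged; otherwise the
    preset resolution is kept unchanged."""
    if display_angle in (90, 270):
        return preset_h, preset_w
    return preset_w, preset_h

def target_proportion(display_w: int, display_h: int, corr_w: int, corr_h: int) -> float:
    """Step 407b: the first proportion compares pixel counts in the length
    direction, the second in the width direction, and the smaller of the two
    is used so that the scaled key points fit entirely within the display."""
    first = display_w / corr_w
    second = display_h / corr_h
    return min(first, second)

# Example: a 1920x1080 display and a 720x960 preset resolution shown at 90 degrees.
corr_w, corr_h = correction_resolution(720, 960, 90)    # -> (960, 720)
print(target_proportion(1920, 1080, corr_w, corr_h))    # first 2.0, second 1.5 -> 1.5
```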
In some embodiments, as shown in fig. 10 in conjunction with fig. 9, the above step 407a may be specifically implemented by the following step 407b and step 407c, and the above step 411 may be specifically implemented by the following steps 411a to 411d.
407b, the display device keeps the correction resolution the same as the preset resolution in case the display angle is 0 degrees or 180 degrees.
407c, under the condition that the display angle is 90 degrees or 270 degrees, the display device adjusts the length and the width of the preset resolution to obtain the correction resolution.
411a, in the case where the display angle is 0 degrees, the display device keeps the corrected coordinates of the respective human body key points the same as the target coordinates.
411b, under the condition that the display angle is 90 degrees, the display device rotates each human body key point indicated by the target coordinates by 90 degrees clockwise around the center point of the preset resolution respectively, and correction coordinates of each human body key point are obtained.
411c, under the condition that the display angle is 180 degrees, the display equipment rotates each human body key point indicated by the target coordinates by 180 degrees clockwise around the center point of the preset resolution respectively, and correction coordinates of each human body key point are obtained.
411d, under the condition that the display angle is 270 degrees, the display equipment rotates each human body key point indicated by the target coordinates clockwise by 270 degrees around the center point of the preset resolution respectively, and correction coordinates of each human body key point are obtained.
In the embodiment of the application, in the case that the display angle is 0 degrees or 180 degrees, the display device keeps the correction resolution the same as the preset resolution; in the case that the display angle is 90 degrees or 270 degrees, the display device adjusts the length and the width of the preset resolution to obtain the correction resolution; in the case that the display angle is 0 degrees, the display device keeps the correction coordinates of each human body key point the same as the target coordinates; in the case that the display angle is 90 degrees, the display device rotates each human body key point indicated by the target coordinates clockwise by 90 degrees around the center point of the preset resolution to obtain the correction coordinates of each human body key point; in the case that the display angle is 180 degrees, the display device rotates each human body key point indicated by the target coordinates clockwise by 180 degrees around the center point of the preset resolution to obtain the correction coordinates of each human body key point; and in the case that the display angle is 270 degrees, the display device rotates each human body key point indicated by the target coordinates clockwise by 270 degrees around the center point of the preset resolution to obtain the correction coordinates of each human body key point. By correcting the preset resolution and the target coordinates in this way, the posture of the avatar drawn based on the corrected human body key point coordinates and the correction resolution is the same as the actual posture of the user in the current frame user image, which further improves the user experience.
In some embodiments, as shown in fig. 11 in conjunction with fig. 8, before the step 402, the image processing method provided in the embodiments of the present application further includes a step 410, where the step 402 may be specifically implemented by the steps 402a and 402b described below.
410. The terminal equipment determines the display angle of the user image of the current frame based on the placement angle of the terminal equipment.
402a, the terminal equipment determines initial coordinates of each human body key point in the current frame of user image under the first resolution through a limb detection algorithm.
402b, the terminal equipment corrects the first resolution and the initial coordinates according to the display angle to obtain the preset resolution and the target coordinates.
In the embodiment of the application, the terminal device determines the display angle of the current frame user image based on the placement angle of the terminal device; the terminal device determines the initial coordinates of each human body key point in the current frame user image under the first resolution through the limb detection algorithm; and the terminal device corrects the first resolution and the initial coordinates according to the display angle to obtain the preset resolution and the target coordinates. That is, the initial coordinates of each human body key point under the first resolution are corrected on the terminal device side, and only the parsing, scaling and drawing operations need to be executed on the display device side, so that the computing power requirement on the display device is further reduced and the human-computer interaction process is smoother.
In some embodiments, as shown in fig. 12 in conjunction with fig. 11, the above step 402b may be specifically implemented by the following steps 402c to 402f.
402c, under the condition that the display angle is 0 degree, the terminal equipment keeps the preset resolution identical to the first resolution, and keeps the target coordinate identical to the initial coordinate.
402d, under the condition that the display angle is 90 degrees, the terminal equipment adjusts the length and the width of the first resolution to obtain the preset resolution, and each human body key point indicated by the initial coordinate is respectively rotated by 90 degrees clockwise around the central point of the first resolution to obtain the target coordinate.
402e, under the condition that the display angle is 180 degrees, the terminal equipment keeps the preset resolution identical to the first resolution, and rotates each human body key point indicated by the initial coordinates clockwise by 180 degrees around the center point of the first resolution respectively, so as to obtain the target coordinates.
402f, under the condition that the display angle is 270 degrees, the terminal equipment adjusts the length and the width of the first resolution to obtain the preset resolution, and each human body key point indicated by the initial coordinate is respectively rotated around the central point of the first resolution by 270 degrees clockwise to obtain the target coordinate.
In the embodiment of the present application, in the case that the display angle is 0 degrees, the terminal device keeps the preset resolution the same as the first resolution, and keeps the target coordinates the same as the initial coordinates; in the case that the display angle is 90 degrees, the terminal device adjusts the length and the width of the first resolution to obtain the preset resolution, and rotates each human body key point indicated by the initial coordinates clockwise by 90 degrees around the center point of the first resolution to obtain the target coordinates; in the case that the display angle is 180 degrees, the terminal device keeps the preset resolution the same as the first resolution, and rotates each human body key point indicated by the initial coordinates clockwise by 180 degrees around the center point of the first resolution to obtain the target coordinates; and in the case that the display angle is 270 degrees, the terminal device adjusts the length and the width of the first resolution to obtain the preset resolution, and rotates each human body key point indicated by the initial coordinates clockwise by 270 degrees around the center point of the first resolution to obtain the target coordinates. In this way, the terminal device executes the operation of correcting the first resolution and the initial coordinates of each human body key point, which further reduces the data processing pressure on the display device while ensuring that the user posture indicated by the target coordinates is the same as the actual posture of the user in the current frame user image, so that the human-computer interaction process is smoother.
In some embodiments, as shown in fig. 13 in conjunction with fig. 11, the above step 410 may be specifically implemented by the following steps 410a to 410d.
410a, in case the placement angle is in the first range or the second range, the terminal device determines that the display angle is 0 degrees.
410b, in case the placement angle is in the third range, the terminal device determines that the display angle is 90 degrees.
410c, in case the placement angle is in the fourth range, the terminal device determines that the display angle is 180 degrees.
410d, in case the placement angle is in the fifth range, the terminal device determines that the display angle is 270 degrees.
Wherein, any value of the first range is less than the minimum value of the third range, any value of the third range is less than the minimum value of the fourth range, any value of the fourth range is less than the minimum value of the fifth range, and any value of the fifth range is less than the minimum value of the second range.
In the embodiment of the application, under the condition that the placement angle is in the first range or the second range, the display angle is determined to be 0 degree; in the case where the placement angle is in the third range, determining that the display angle is 90 degrees; in the case where the placement angle is in the fourth range, determining that the display angle is 180 degrees; in the case where the placement angle is in the fifth range, determining that the display angle is 270 degrees; wherein, any value of the first range is less than the minimum value of the third range, any value of the third range is less than the minimum value of the fourth range, any value of the fourth range is less than the minimum value of the fifth range, and any value of the fifth range is less than the minimum value of the second range. Therefore, the preset resolution and the target coordinates are prevented from being frequently corrected according to the display angle, the calculation force in the correction process is saved, and the experience of a user in the process of human-computer interaction is further improved.
It should be noted that, in the foregoing method embodiment, the same process as that in the device embodiment may refer to the related description of the device embodiment, which is not repeated herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements each process executed by the image processing method, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
The computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the present invention further provides a computer program product; when the computer program product runs on a computer, the computer is caused to implement the image processing method described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present application, and not for limiting them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some or all of the technical features thereof can be replaced by equivalents; and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, the display device comprising:
a communicator configured to: receiving a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and preset resolution, the target coordinates comprise coordinates of each human body key point in a current frame user image determined by the terminal device through a limb detection algorithm under the preset resolution, and the current frame user image is acquired by the terminal device;
a controller configured to: analyzing the target data packet to obtain the target coordinates and the preset resolution;
Determining a target proportion based on the preset resolution and the display resolution of the display device;
scaling the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are all located in a preset display area;
and drawing an avatar corresponding to the current frame user image based on the scaling coordinates of each human body key point.
2. The display device of claim 1, wherein the target data packet further comprises: a display angle; the controller is specifically configured to:
analyzing the target data packet to obtain the target coordinates, the preset resolution and the display angle;
correcting the preset resolution according to the display angle to obtain corrected resolution;
determining the smaller of a first proportion and a second proportion as the target proportion, wherein the first proportion is the ratio of the number of pixels of the display resolution in the length direction to the number of pixels of the correction resolution in the length direction, and the second proportion is the ratio of the number of pixels of the display resolution in the width direction to the number of pixels of the correction resolution in the width direction;
The controller is further configured to: before the target coordinates are scaled according to the target proportion to obtain scaled coordinates of each human body key point, correcting the target coordinates according to the display angle to obtain corrected coordinates of each human body key point;
the controller is specifically configured to: and scaling the corrected coordinates of the key points of the human body according to the target proportion to obtain scaled coordinates of the key points of the human body.
3. The display device of claim 2, wherein the controller is specifically configured to:
maintaining the correction resolution to be the same as the preset resolution in the case where the display angle is 0 degrees or 180 degrees;
under the condition that the display angle is 90 degrees or 270 degrees, adjusting the length and the width of the preset resolution to obtain the correction resolution;
under the condition that the display angle is 0 degree, keeping the correction coordinates of the key points of the human body to be the same as the target coordinates;
under the condition that the display angle is 90 degrees, rotating each human body key point indicated by the target coordinates by 90 degrees clockwise around the center point of the preset resolution respectively to obtain correction coordinates of each human body key point;
Under the condition that the display angle is 180 degrees, rotating each human body key point indicated by the target coordinates by 180 degrees clockwise around the center point of the preset resolution respectively to obtain correction coordinates of each human body key point;
and under the condition that the display angle is 270 degrees, rotating each human body key point indicated by the target coordinates by 270 degrees clockwise around the center point of the preset resolution respectively to obtain the correction coordinates of each human body key point.
4. A terminal device, comprising:
an image collector configured to: collecting a user image of a current frame;
a controller configured to: determining coordinates of each human body key point in the current frame user image under a preset resolution through a limb detection algorithm to obtain target coordinates;
generating a target data packet based on the target coordinates and the preset resolution;
a communicator configured to: and sending the target data packet to a display device, so that the display device determines scaling coordinates of each human body key point for drawing the virtual image corresponding to the current frame user image based on the target coordinates and the preset resolution obtained by analyzing the target data packet, wherein the scaling coordinates of each human body key point are obtained by scaling the target coordinates according to a target proportion, the scaling coordinates of each human body key point are all located in a preset display area of the display device, and the target proportion is obtained based on the preset resolution and the display resolution of the display device.
5. The terminal device of claim 4, wherein the controller is further configured to determine, based on a placement angle of the terminal device, a display angle of the current frame user image before the coordinates of each human body key point in the current frame user image at a preset resolution are determined by the limb detection algorithm to obtain target coordinates;
the controller is specifically configured to: generate a target data packet based on the target coordinates, the preset resolution and the display angle, so that the display device determines scaling coordinates of each human body key point for drawing an avatar corresponding to the current frame user image based on the target coordinates, the preset resolution and the display angle obtained by analyzing the target data packet, wherein the scaling coordinates of each human body key point are obtained by scaling the correction coordinates of each human body key point according to the target proportion, the correction coordinates of each human body key point are obtained by correcting the target coordinates according to the display angle, the target proportion is the smaller of a first proportion and a second proportion, the first proportion is a ratio of the number of pixels of the display resolution in the length direction to the number of pixels of the correction resolution in the length direction, the second proportion is a ratio of the number of pixels of the display resolution in the width direction to the number of pixels of the correction resolution in the width direction, and the correction resolution is obtained by correcting the preset resolution according to the display angle.
6. The terminal device of claim 4, wherein the controller is further configured to determine, based on a placement angle of the terminal device, a display angle of the current frame user image before the coordinates of each human body key point in the current frame user image at a preset resolution are determined by the limb detection algorithm to obtain target coordinates;
the controller is specifically configured to: determining initial coordinates of each human body key point in the current frame user image under a first resolution through a limb detection algorithm;
and correcting the first resolution and the initial coordinate according to the display angle to obtain the preset resolution and the target coordinate.
7. The terminal device of claim 6, wherein the controller is specifically configured to:
under the condition that the display angle is 0 degree, keeping the preset resolution identical to the first resolution, and keeping the target coordinate identical to the initial coordinate;
under the condition that the display angle is 90 degrees, the length and the width of the first resolution are adjusted to obtain the preset resolution, and each human body key point indicated by the initial coordinate is respectively rotated clockwise by 90 degrees around the central point of the first resolution to obtain the target coordinate;
under the condition that the display angle is 180 degrees, keeping the preset resolution the same as the first resolution, and rotating each human body key point indicated by the initial coordinates clockwise by 180 degrees around the center point of the first resolution to obtain the target coordinates;
and under the condition that the display angle is 270 degrees, adjusting the length and the width of the first resolution to obtain the preset resolution, and respectively rotating each human body key point indicated by the initial coordinate around the central point of the first resolution by 270 degrees clockwise to obtain the target coordinate.
8. The terminal device according to claim 5 or 6, characterized in that the controller is specifically configured to:
determining that the display angle is 0 degrees in the case that the placement angle is in a first range or a second range;
determining that the display angle is 90 degrees in the case that the placement angle is in a third range;
determining that the display angle is 180 degrees in the case that the placement angle is in a fourth range;
determining that the display angle is 270 degrees in the case that the placement angle is in a fifth range;
wherein any value of the first range is less than the minimum value of the third range, any value of the third range is less than the minimum value of the fourth range, any value of the fourth range is less than the minimum value of the fifth range, and any value of the fifth range is less than the minimum value of the second range.
9. An image processing method, characterized by being applied to a display device, the method comprising:
receiving a target data packet sent by a terminal device, wherein the target data packet is generated by the terminal device based on target coordinates and preset resolution, the target coordinates comprise coordinates of each human body key point in a current frame user image determined by the terminal device through a limb detection algorithm under the preset resolution, and the current frame user image is acquired by the terminal device;
analyzing the target data packet to obtain the target coordinates and the preset resolution;
determining a target proportion based on the preset resolution and the display resolution of the display device;
scaling the target coordinates according to the target proportion to obtain scaled coordinates of each human body key point, wherein the scaled coordinates of each human body key point are all located in a preset display area;
and drawing an avatar corresponding to the current frame user image based on the scaling coordinates of each human body key point.
10. An image processing method, applied to a terminal device, comprising:
collecting a user image of a current frame;
Determining coordinates of each human body key point in the current frame user image under a preset resolution through a limb detection algorithm to obtain target coordinates;
generating a target data packet based on the target coordinates and the preset resolution;
and sending the target data packet to a display device, so that the display device determines scaling coordinates of each human body key point for drawing the virtual image corresponding to the current frame user image based on the target coordinates and the preset resolution obtained by analyzing the target data packet, wherein the scaling coordinates of each human body key point are obtained by scaling the target coordinates according to a target proportion, the scaling coordinates of each human body key point are all located in a preset display area of the display device, and the target proportion is obtained based on the preset resolution and the display resolution of the display device.
CN202310507948.5A 2023-05-08 2023-05-08 Display device, terminal device and image processing method Pending CN117651170A (en)

Priority Applications (1)

Application Number: CN202310507948.5A; Priority Date: 2023-05-08; Filing Date: 2023-05-08; Publication: CN117651170A (en); Title: Display device, terminal device and image processing method

Applications Claiming Priority (1)

Application Number: CN202310507948.5A; Priority Date: 2023-05-08; Filing Date: 2023-05-08; Publication: CN117651170A (en); Title: Display device, terminal device and image processing method

Publications (1)

Publication Number: CN117651170A; Publication Date: 2024-03-05

Family

ID=90048289

Family Applications (1)

Application Number: CN202310507948.5A; Status: Pending; Publication: CN117651170A (en)

Country Status (1)

Country: CN; Link: CN117651170A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination