CN112486691A - Control method and system of display device and computer readable storage medium - Google Patents

Control method and system of display device and computer readable storage medium

Info

Publication number
CN112486691A
Authority
CN
China
Prior art keywords
display device
mobile terminal
parameters
parameter
neural network
Prior art date
Legal status
Pending
Application number
CN202011500919.9A
Other languages
Chinese (zh)
Inventor
薛凯文
赖长明
徐永泽
孙志杰
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202011500919.9A priority Critical patent/CN112486691A/en
Publication of CN112486691A publication Critical patent/CN112486691A/en
Priority to PCT/CN2021/132038 priority patent/WO2022127522A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a control method and system for a display device, and a computer readable storage medium. The control method of the display device comprises the following steps: receiving an instruction sent by the display device, and analyzing the instruction to obtain an input parameter; determining an output parameter corresponding to the input parameter through a neural network model; and sending the output parameter to the display device so that the display device executes the operation corresponding to the output parameter. The invention solves the problems of low computing efficiency and slow response speed that arise when a display device obtains data results through a neural network model on its own.

Description

Control method and system of display device and computer readable storage medium
Technical Field
The present invention relates to the field of display device technologies, and in particular, to a method and a system for controlling a display device, and a computer-readable storage medium.
Background
With the development of science and technology, and with the addition of image recognition and AI technology, the functions of display devices have become increasingly rich and varied. For example, an AI (Artificial Intelligence) smart television can book hotels and air tickets, and through AI image recognition it can help users place orders for unfamiliar goods they like in television dramas. Behind AI image recognition is a deep neural network; trained on a computing platform with a large amount of data, the neural network can provide accurate recognition results. However, when a data result is obtained through a neural network model on the display device, the display device needs to occupy considerable computing resources, and because the computing resources and computing capability of the display device are limited, the display device has low computing efficiency and a slow response speed when obtaining data results through the neural network model.
Disclosure of Invention
The invention mainly aims to provide a control method and a control system of a display device and a computer readable storage medium, so as to solve the problems of low computing efficiency and slow response speed when a display device obtains data results through a neural network model.
In order to achieve the above object, the present invention provides a control method of a display device, the control method of the display device being applied to a mobile terminal, the control method of the display device comprising:
receiving an instruction sent by display equipment, and analyzing the instruction to obtain an input parameter;
determining an output parameter corresponding to the input parameter through a neural network model;
and sending the output parameters to the display equipment so that the display equipment executes the operation corresponding to the output parameters.
Optionally, the instruction includes the input parameter and an identity of a target application of the display device, and the step of determining, by using a neural network model, an output parameter corresponding to the input parameter includes:
and inputting the input parameters into the neural network model corresponding to the identity to obtain output parameters corresponding to the input parameters, wherein the identity and the input parameters are obtained by analyzing the instruction.
Optionally, before the step of determining the output parameter corresponding to the input parameter through the neural network model, the method further includes:
receiving an initial model sent by a server;
acquiring user use information of one or more intelligent devices;
and training the initial model according to the user use information to obtain the neural network model.
Optionally, the step of training the initial model according to the user usage information to obtain the neural network model includes:
determining a gradient according to the user use information, and sending the gradient to a server so that the server determines an accumulated gradient according to the gradient sent by each mobile terminal;
receiving the accumulated gradient fed back by the server, and setting the training parameters of the initial model according to the accumulated gradient to obtain a training model;
and training the training model according to the user use information to obtain the neural network model.
In order to achieve the above object, the present invention also provides a control method of a display apparatus, the control method of the display apparatus being applied to the display apparatus, the control method of the display apparatus including:
acquiring an input parameter;
generating an instruction according to the input parameter, and sending the instruction to a mobile terminal so that the mobile terminal can obtain the input parameter according to the instruction, and obtain an output parameter corresponding to the input parameter through the input parameter and a neural network model;
and receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters.
Optionally, the step of acquiring the input parameters includes:
acquiring initial parameters to be operated and idle computing resources of the display equipment;
determining the input parameters from the initial parameters according to the idle computing resources, wherein the idle computing resources are inversely proportional to the computing resources that computing the input parameters would occupy on the display device.
Optionally, the executing the operation corresponding to the output parameter includes:
operating other parameters to obtain target parameters, wherein the other parameters and the input parameters form the initial parameters;
and determining the operation to be executed by the display equipment according to the target parameter and the output parameter, and executing the operation.
Optionally, the step of generating an instruction according to the input parameter includes:
determining a currently running application program;
and determining an identity corresponding to the application program, and generating an instruction according to the identity and the input parameters.
Optionally, the step of receiving the output parameter fed back by the mobile terminal and executing an operation corresponding to the output parameter includes:
and determining the operation to be executed by the display equipment according to the output parameters fed back by each mobile terminal, and executing the operation.
In order to achieve the above object, the present invention further provides a control method of a display device, the control method of a display device being applied to a server, the control method of a display device including:
sending an initial model to each mobile terminal;
receiving gradients sent by each mobile terminal;
and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal, wherein the mobile terminal obtains a neural network model based on the accumulated gradient and the initial model, and determines an output parameter corresponding to an input parameter sent by a display device based on the neural network model, so that the display device executes an operation corresponding to the output parameter.
Optionally, the step of sending the initial model to each mobile terminal includes:
acquiring a preset model, and performing format conversion on the preset model to obtain an initial model;
and sending the initial model to each mobile terminal.
In addition, to achieve the above object, the present invention further provides a mobile terminal, which includes a memory, a processor, and a control program of a display device stored on the memory and operable on the processor, wherein the control program of the display device, when executed by the processor, implements the steps of the control method of the display device as described above.
Furthermore, to achieve the above object, the present invention also proposes a display device comprising a memory, a processor, and a control program of the display device stored on the memory and executable on the processor, the control program of the display device implementing the steps of the control method of the display device as described above when executed by the processor.
In addition, in order to achieve the above object, the present invention further proposes a server including a memory, a processor, and a control program of a display device stored on the memory and operable on the processor, the control program of the display device implementing the steps of the control method of the display device as described above when executed by the processor.
Further, to achieve the above object, the present invention also proposes a control system of a display apparatus, comprising:
the display equipment is used for acquiring input parameters; generating an instruction according to the input parameters, and sending the instruction to the mobile terminal; receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters;
the mobile terminal is used for receiving the instruction sent by the display equipment and analyzing the instruction to obtain an input parameter; determining an output parameter corresponding to the input parameter through a neural network model; sending the output parameters to the display device;
the server is used for sending the initial model to each mobile terminal; receiving gradients sent by each mobile terminal; and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal so that the mobile terminal can obtain a neural network model according to the accumulated gradient and the initial model.
Further, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon a control program of a display apparatus, which when executed by a processor, implements the steps of the control method of the display apparatus as described above.
The invention provides a control method and a control system of a display device and a computer readable storage medium. A mobile terminal receives an instruction sent by the display device, analyzes the instruction to obtain an input parameter, determines an output parameter corresponding to the input parameter through a neural network model, and sends the output parameter to the display device so that the display device executes the operation corresponding to the output parameter. In this scheme, the display device sends the input parameter to the mobile terminal in the form of an instruction, and the output parameter corresponding to the input parameter is determined by the neural network model in the mobile terminal; the display device does not execute the determination process of the output parameter and only executes the corresponding operation according to the output parameter returned by the mobile terminal, so the determination of the output parameter does not excessively occupy the computing resources of the display device.
Drawings
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a hardware architecture of a mobile terminal/display device/server according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for controlling a display device according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of a method for controlling a display device according to the present invention;
FIG. 4 is a flowchart illustrating a third embodiment of a method for controlling a display device according to the present invention;
FIG. 5 is a flowchart illustrating a fourth embodiment of a method for controlling a display device according to the present invention;
FIG. 6 is a flowchart illustrating a fifth embodiment of a method for controlling a display device according to the present invention;
FIG. 7 is a flowchart illustrating a sixth exemplary embodiment of a method for controlling a display device according to the present invention;
FIG. 8 is a flowchart illustrating a seventh exemplary embodiment of a method for controlling a display device according to the present invention;
fig. 9 is a schematic flowchart of an eighth embodiment of a control method for a display device according to the present invention;
fig. 10 is a first schematic diagram of interaction between a mobile terminal and a display device according to an embodiment of the present invention;
fig. 11 is a schematic interaction diagram ii of a mobile terminal and a display device according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a control system of a display device according to an embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: receiving an instruction sent by a display device, analyzing the instruction to obtain an input parameter, determining an output parameter corresponding to the input parameter through a neural network model, and sending the output parameter to the display device so that the display device executes the operation corresponding to the output parameter. In this scheme, the display device sends the input parameter to the mobile terminal in the form of an instruction, and the output parameter corresponding to the input parameter is determined by the neural network model in the mobile terminal; the display device does not execute the determination process of the output parameter and only executes the corresponding operation according to the output parameter returned by the mobile terminal, so the determination of the output parameter does not excessively occupy the computing resources of the display device.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
As shown in fig. 1, the hardware architecture diagram shown in fig. 1 is applied to the mobile terminal according to the embodiment.
As shown in fig. 1, the mobile terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM or a non-volatile memory (such as a disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the mobile terminal shown in fig. 1 is not intended to be limiting of the mobile terminal and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system and a control program of a display device.
In the mobile terminal shown in fig. 1, the processor 1001 may be configured to call a control program of the display device stored in the memory 1005, and perform the following operations:
receiving an instruction sent by display equipment, and analyzing the instruction to obtain an input parameter;
determining an output parameter corresponding to the input parameter through a neural network model;
and sending the output parameters to the display equipment so that the display equipment executes the operation corresponding to the output parameters.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
and inputting the input parameters into the neural network model corresponding to the identity to obtain output parameters corresponding to the input parameters, wherein the identity and the input parameters are obtained by analyzing the instruction.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
receiving an initial model sent by a server;
acquiring user use information of one or more intelligent devices;
and training the initial model according to the user use information to obtain the neural network model.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
determining a gradient according to the user use information, and sending the gradient to a server so that the server determines an accumulated gradient according to the gradient sent by each mobile terminal;
receiving the accumulated gradient fed back by the server, and setting the training parameters of the initial model according to the accumulated gradient to obtain a training model;
and training the training model according to the user use information to obtain the neural network model.
As shown in fig. 1, the hardware architecture diagram shown in fig. 1 can also be applied to the display device according to the embodiment.
As shown in fig. 1, the display apparatus may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM or a non-volatile memory (such as a disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the display device shown in fig. 1 does not constitute a limitation of the display device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system and a control program of a display device.
In the display device shown in fig. 1, the processor 1001 may be configured to call a control program of the display device stored in the memory 1005, and perform the following operations:
acquiring an input parameter;
generating an instruction according to the input parameter, and sending the instruction to a mobile terminal so that the mobile terminal can obtain the input parameter according to the instruction, and obtain an output parameter corresponding to the input parameter through the input parameter and a neural network model;
and receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
acquiring initial parameters to be operated and idle computing resources of the display equipment;
determining the input parameters from the initial parameters according to the idle computing resources, wherein the idle computing resources are inversely proportional to the computing resources that computing the input parameters would occupy on the display device.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
operating other parameters to obtain target parameters, wherein the other parameters and the input parameters form the initial parameters;
and determining the operation to be executed by the display equipment according to the target parameter and the output parameter, and executing the operation.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
determining a currently running application program;
and determining an identity corresponding to the application program, and generating an instruction according to the identity and the input parameters.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
and determining the operation to be executed by the display equipment according to the output parameters fed back by each mobile terminal, and executing the operation.
As shown in fig. 1, the hardware architecture diagram shown in fig. 1 can also be applied to the server according to the embodiment.
As shown in fig. 1, the server may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM or a non-volatile memory (such as a disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the server shown in FIG. 1 is not intended to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system and a control program of a display device.
In the server shown in fig. 1, the processor 1001 may be configured to call a control program of the display device stored in the memory 1005, and perform the following operations:
sending an initial model to each mobile terminal;
receiving gradients sent by each mobile terminal;
and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal, wherein the mobile terminal obtains a neural network model based on the accumulated gradient and the initial model, and determines an output parameter corresponding to an input parameter sent by a display device based on the neural network model, so that the display device executes an operation corresponding to the output parameter.
Further, the processor 1001 may call a control program of the display device stored in the memory 1005, and also perform the following operations:
acquiring a preset model, and performing format conversion on the preset model to obtain an initial model;
and sending the initial model to each mobile terminal.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a method for controlling a display device according to the present invention, where the method for controlling a display device is applied to a mobile terminal, and the method for controlling a display device includes the following steps:
step S10, receiving an instruction sent by display equipment, and analyzing the instruction to obtain an input parameter;
In this embodiment, the execution subject is a mobile terminal. A mobile terminal, also called a mobile communication terminal, is a hardware device that has a computer architecture and can be used while mobile; that architecture can be described by the von Neumann structure, that is, the device includes an arithmetic logic unit, a control circuit, a memory, and input-output devices. A display device, also called a display, is a device that can output image or tactile information. The instruction is computer control information sent by the display device and is used to control the running state of the mobile terminal. The input parameter is a control parameter contained in the instruction and serves as the input for controlling the running state of the mobile terminal.
In an actual application scenario, the control method of the display device in this embodiment may be applied to a mobile terminal, where the mobile terminal is a terminal device with strong computing capability, such as a smartphone or a notebook computer. The display device includes a display unit for displaying content, for example a smart TV with a display panel, a computer, or a smart refrigerator. The mobile terminal and the display device can interact, that is, they can communicate over a wired or wireless connection: the display device controls the mobile terminal to run a corresponding function by sending an instruction, and the mobile terminal generates a corresponding output result according to the received instruction and feeds the output result back to the display device. Furthermore, the display device may have the mobile terminal assist its work by sending instructions. For example, the mobile terminal may assist the display device in performing operations of high complexity or in carrying out the inference of a neural network model, and the display device may also use the mobile terminal as a camera to take pictures or record videos. The display device lets the mobile terminal undertake part of its work, the corresponding output result is generated on the mobile terminal and fed back to the display device, and the display device performs the corresponding post-processing operation according to the output result, for example rendering and displaying an image according to the output result fed back by the mobile terminal. It should be noted that the work the mobile terminal does to assist the display device could also be completed directly on the display device. In general, however, when such work is executed entirely by the display device, it occupies more resources and demands more computing power, while the computing resources and computing capability of the display device are limited. By letting the mobile terminal assist with part of the work, the display device can save its computing resources and improve computing efficiency and response speed. For example, when the display device needs to execute an AI function through a neural network model, the inference process of the neural network model has high computational complexity and occupies considerable computing resources; the mobile terminal can therefore perform the inference of the neural network model on behalf of the display device, generate the inference result on the mobile terminal, and feed it back to the display device, while the display device only performs post-processing according to the inference result. The assistance of the mobile terminal saves the computing resources of the display device and improves computing efficiency and response speed. Alternatively, when the display device needs to use a camera of the mobile terminal, the mobile terminal may also act as a camera, for example during a video call or for the inference of a neural network model that requires a captured image.
In this embodiment, the display device sends an instruction to the mobile terminal, the mobile terminal receives the instruction, and the mobile terminal analyzes the received instruction to obtain the input parameter. Specifically, referring to fig. 10, fig. 10 is a first interaction diagram of the mobile terminal and the display device according to the embodiment of the present invention. As shown in fig. 10, an application program in the display device 01 sends an instruction to the mobile terminal 02; after receiving the instruction, the mobile terminal 02 analyzes it and extracts the input parameter it contains. The instruction may be an AI function instruction, for example a face recognition instruction, a target detection instruction, or a 3D modeling instruction, and the input parameter may be the input parameter corresponding to that AI function instruction, for example the input parameter of a face recognition function, of a target detection function, or of a 3D modeling function.
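For illustration only, the following Python sketch shows one way the mobile terminal might carry out this step. The patent does not specify a wire format, so the JSON encoding and the field names "app_id" and "input" are assumptions.

    import json
    from typing import Any, Dict, Tuple

    def parse_instruction(raw_instruction: bytes) -> Tuple[str, Dict[str, Any]]:
        """Step S10 (assumed format): decode an instruction from the display device
        and extract the target application's identity and the input parameters."""
        message = json.loads(raw_instruction.decode("utf-8"))
        app_identity = message["app_id"]     # identity of the target application (e.g. "face_recognition")
        input_parameters = message["input"]  # parameters to feed the neural network model
        return app_identity, input_parameters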
Step S20, determining output parameters corresponding to the input parameters through a neural network model;
In this embodiment, a neural network is a complex network system formed by a large number of simple processing units that are widely interconnected; it reflects many basic characteristics of human brain function and is a highly complex nonlinear dynamical learning system. The neural network model is a model built and trained with such a neural network, and it can perform inference on input parameters to generate an inference result. The output parameter is the inference result obtained after the neural network model performs inference on the input parameter, and it is used to control the running state of the display device.
In an actual application scenario, the mobile terminal 02 may assist the display device 01 with neural network model inference. The trained neural network model is stored on the mobile terminal 02; after receiving the instruction sent by the display device 01 and analyzing it to obtain the input parameter, the mobile terminal 02 inputs the input parameter into the neural network model for inference to obtain the output parameter corresponding to the input parameter.
In an actual application scenario, the instruction sent by the display device 01 includes an input parameter and the identity of a target application program of the display device. The mobile terminal 02 obtains the input parameter and the identity of the target application program by analyzing the instruction, and inputs the input parameter into the neural network model corresponding to that identity to obtain the output parameter corresponding to the input parameter. Here, an application program is a computer program with which the display device 01 completes one or more specific tasks; the display device 01 contains various application programs, each of which implements one function, and the target application program is the application program currently run by the display device 01. For example, if the display device 01 performs a face recognition function and the application currently running on it is a face recognition application, the target application program is the face recognition application; if it performs a target detection function and the currently running application is a target detection application, the target application program is the target detection application; if it performs a 3D modeling function and the currently running application is a 3D modeling application, the target application program is the 3D modeling application. Each application program in the display device 01 corresponds to an identity, which is identification information used to distinguish different applications and is usually represented as a character-string code. Each application program in the display device 01 also corresponds to its own input parameters. The mobile terminal 02 contains a plurality of neural network models, and different neural network models correspond to different application programs in the display device 01, that is, to the identities of different application programs. The display device 01 determines the target application program to run according to the function to be executed, and at the same time acquires the identity of the target application program and the input parameter corresponding to it. The display device 01 generates an instruction from that identity and input parameter and sends it to the mobile terminal 02; the mobile terminal 02 analyzes the instruction to obtain the input parameter and the identity of the target application program, looks up the corresponding neural network model according to that identity, and inputs the input parameter into the neural network model corresponding to the identity for inference to obtain the output parameter corresponding to the input parameter. The output parameter in this embodiment may be the output parameter corresponding to the AI function instruction, for example face recognition information, target detection information, or 3D image information.
Specifically, if the input parameters correspond to the face recognition application, they are input into the neural network model corresponding to the identity of the face recognition application for inference, yielding face recognition information. If the input parameters correspond to the target detection application, a detection image is captured, and the input parameters and the detection image are input into the neural network model corresponding to the identity of the target detection application for inference, yielding target detection information. If the input parameters correspond to the 3D modeling application, a 3D image is captured, and the input parameters and the 3D image are input into the neural network model corresponding to the identity of the 3D modeling application for inference, yielding 3D image information.
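A minimal sketch of step S20 follows, assuming the mobile terminal keeps one trained model per application identity. The identities and the placeholder callables are hypothetical; in practice each registry entry would be a trained network running on a mobile inference engine.

    from typing import Any, Callable, Dict

    # Hypothetical registry on the mobile terminal: application identity -> trained model.
    # The lambdas are placeholders standing in for real inference engines.
    MODEL_REGISTRY: Dict[str, Callable[[Any], Any]] = {
        "face_recognition": lambda params: {"embedding": [0.12, -0.87, 0.31]},
        "object_detection": lambda params: {"boxes": [], "classes": []},
        "3d_modeling":      lambda params: {"joints": []},
    }

    def infer(app_identity: str, input_parameters: Any) -> Any:
        """Step S20: run the neural network model matching the instruction's identity
        and return the output parameters for the display device."""
        model = MODEL_REGISTRY[app_identity]
        return model(input_parameters)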
Step S30, sending the output parameter to the display device, so that the display device executes an operation corresponding to the output parameter.
In this embodiment, the operation corresponding to the output parameter is an operation executed by the display device according to the output parameter, and after the mobile terminal obtains the output parameter, the mobile terminal sends the output parameter to the display device, so that the display device executes the operation corresponding to the output parameter.
In an actual application scenario, after obtaining the output parameter, the mobile terminal 02 sends it to the display device 01, and the display device 01 executes the corresponding operation according to the output parameter. For example, when the output parameter is face recognition information, the mobile terminal 02 transmits the face recognition information to the display device 01, so that the display device 01 performs a face recognition operation according to it; when the output parameter is target detection information, the mobile terminal 02 sends the target detection information to the display device 01, so that the display device 01 executes a target detection operation according to it; when the output parameter is 3D image information, the mobile terminal 02 transmits the 3D image information to the display device 01 so that the display device 01 performs a 3D modeling operation according to it. It should be noted that when the display device 01 only needs the mobile terminal 02 to serve as a camera, the mobile terminal 02 may directly send the captured image to the display device 01; for example, during a video call the mobile terminal 02 sends the captured image to the display device 01 in real time for display, taking advantage of the large screen of the display device.
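Putting steps S10 to S30 together, a mobile-terminal handler might look like the sketch below, reusing the parse_instruction and infer helpers from the earlier sketches. The send_to_display callback and the reply format are assumptions; the patent leaves the transport (wired or wireless) open.

    import json
    from typing import Callable

    def handle_instruction(raw_instruction: bytes,
                           send_to_display: Callable[[bytes], None]) -> None:
        """Steps S10-S30 on the mobile terminal: parse, infer, reply (sketch)."""
        app_identity, input_parameters = parse_instruction(raw_instruction)   # step S10
        output_parameters = infer(app_identity, input_parameters)             # step S20
        reply = json.dumps({"app_id": app_identity, "output": output_parameters})
        send_to_display(reply.encode("utf-8"))                                # step S30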
Further, in a practical application scenario, several mobile terminals may assist the display device with the inference of several neural network models. If the display device needs to run several neural network models simultaneously, it may interact with several mobile terminals. Specifically, referring to fig. 11, fig. 11 is a second interaction diagram of mobile terminals and a display device according to an embodiment of the present invention. As shown in fig. 11, the display device 01 interacts with the mobile terminal 02 and, at the same time, with the mobile terminal 03. The display device 01 sends different instructions to the mobile terminal 02 and the mobile terminal 03; each terminal analyzes its instruction to obtain the identity and the input parameters, inputs the input parameters into the neural network model corresponding to that identity for inference, generates the output parameters corresponding to the input parameters, and feeds them back to the display device 01. For example, the mobile terminal 02 assists the display device 01 with the inference of a face recognition neural network model to generate face recognition information, while the mobile terminal 03 assists with the inference of a target detection neural network model to generate target detection information. Of course, if the display device 01 has spare computing resources, it may also run a lightweight neural network model by itself to maximize speed and efficiency. It can be understood that the number of mobile terminals may be determined according to specific needs, which is not limited in this embodiment.
In the technical scheme provided by this embodiment, the mobile terminal 02 receives the instruction sent by the display device 01, analyzes the instruction to obtain the input parameter, determines the output parameter corresponding to the input parameter through the neural network model, and sends the output parameter to the display device 01, so that the display device 01 executes the operation corresponding to the output parameter. In this scheme, the display device 01 sends the input parameter to the mobile terminal 02 in the form of an instruction, and the output parameter corresponding to the input parameter is determined by the neural network model in the mobile terminal 02; the display device 01 does not execute the determination process of the output parameter and only executes the corresponding operation according to the output parameter returned by the mobile terminal 02, so the determination of the output parameter does not excessively occupy the computing resources of the display device 01. Moreover, because the computing capability of the mobile terminal 02 is generally stronger than that of the display device, computing efficiency and accuracy are improved, which further improves the response speed of the display device 01 and solves the problems of low computing efficiency and slow response speed when the display device obtains data results through the neural network model.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of the control method for a display device according to the present invention, and based on the first embodiment, before the step of S20, the method further includes:
step S40, receiving the initial model sent by the server;
In this embodiment, the server is a computer device with high-speed CPU computing capability, long-term reliable operation, strong external I/O data throughput, and good extensibility. The initial model is a network model that has not yet been trained and cannot be directly used for neural network model inference.
In an actual application scenario, the server interacts with the mobile terminal, that is, the server is communicatively connected with the mobile terminal; the server sends the initial model to the mobile terminal, and the mobile terminal receives it. Specifically, referring to fig. 12, fig. 12 is a schematic diagram of a control system of a display device according to an embodiment of the present invention. As shown in fig. 12, the control system of the display device includes a display device 01, a mobile terminal 02, and a server 04. An initial model is stored in the server 04; the server 04 sends the initial model to the mobile terminal 02, and the mobile terminal 02 receives it. The display device 01 may be a smart television, or another IoT (Internet of Things) device such as a smart refrigerator or a smart door lock. The server 04 may be an edge server or a cloud server. The mobile terminal 02 may receive one or more initial models.
Step S50, obtaining user use information of one or more intelligent devices;
in this embodiment, the intelligent device is a highly automated device, the intelligent device includes but is not limited to the display device 01, and the intelligent device interacts with the mobile terminal 02; the user usage information is a usage record generated by the user using the intelligent device, and is used for reflecting the usage condition of the intelligent device by the user, such as the usage duration of the user, the browsing record of the user, the preference information of the user, and the like.
In an actual application scenario, after the mobile terminal 02 acquires the initial model sent by the server 04, it acquires user usage information of one or more intelligent devices. Specifically, when a user uses an intelligent device, the intelligent device generates user usage information and stores it in its memory; since the intelligent device is communicatively connected with the mobile terminal 02, it sends the user usage information to the mobile terminal 02 at regular intervals, so the mobile terminal 02 can regularly acquire the user usage information of the intelligent device. It will be appreciated that the intelligent device may periodically update the user usage information it stores.
And step S60, training the initial model according to the user use information to obtain the neural network model.
In this embodiment, the neural network model is a model obtained by training an initial model.
In an actual application scenario, after the mobile terminal 02 acquires the user use information of the intelligent device, the initial model is trained according to the acquired user use information, and a neural network model is obtained.
In the technical scheme provided by this embodiment, the mobile terminal 02 obtains the user usage information of one or more intelligent devices by receiving the initial model sent by the server, and trains the initial model according to the user usage information to obtain the neural network model. According to the scheme, the neural network model is obtained based on the use information training of the user on the intelligent device, so that the neural network model is more fit with the actual situation of the user, and the obtained neural network model has more practical significance.
Referring to fig. 4, fig. 4 is a flowchart illustrating a third embodiment of a control method for a display device according to the present invention, where based on the second embodiment, the step of S60 includes:
step S61, determining gradient according to the user use information, and sending the gradient to a server, so that the server determines accumulated gradient according to the gradient sent by each mobile terminal;
In this embodiment, a gradient is a direction vector generated during the training of the neural network model; it is used to optimize the network structure of the neural network model and thereby improve its recognition accuracy. The accumulated gradient is the result of aggregating different gradients.
In an actual application scenario, as shown in fig. 12, after obtaining the user usage information, the mobile terminal 02 trains the initial model on the user usage information to obtain a gradient and sends the gradient to the server 04. The server 04 receives the gradients generated by the different mobile terminals and aggregates them with an aggregation algorithm to obtain the accumulated gradient.
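A sketch of the server-side aggregation in step S61 is given below. The patent only states that the server combines the gradients with an aggregation algorithm; plain averaging (as in federated averaging) is assumed here, and each uploaded gradient is represented as a single NumPy array for illustration.

    from typing import List
    import numpy as np

    def aggregate_gradients(gradients_from_terminals: List[np.ndarray]) -> np.ndarray:
        """Combine the gradients uploaded by each mobile terminal into the accumulated
        gradient that is fed back to every terminal (aggregation rule assumed: mean)."""
        return np.mean(np.stack(gradients_from_terminals), axis=0)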
Step S62, receiving the accumulated gradient fed back by the server, and setting the training parameters of the initial model according to the accumulated gradient to obtain a training model;
in this embodiment, the training parameters are parameters set when the initial model is trained, such as a learning rate, a sliding step length, and the like; the training model is an initial model after the training parameters are determined.
In an actual application scenario, after the server 04 aggregates the accumulated gradients to obtain the accumulated gradients, the accumulated gradients are fed back to the mobile terminal 02, the mobile terminal 02 receives the accumulated gradients fed back by the server 04, and training parameters of the initial model are set according to the accumulated gradients to obtain a training model.
And step S63, training the training model according to the user use information to obtain the neural network model.
In this embodiment, after obtaining the training model, the mobile terminal 02 trains it on the user usage information to obtain the neural network model. Specifically, the mobile terminal 02 inputs the user usage information into the training model for training, calculates the loss function of the training model, and determines whether the loss function converges. When the loss function converges, the training model whose loss has converged is taken as the neural network model; when it does not converge, the mobile terminal 02 updates the training parameters of the training model and continues training the updated training model with the user usage information until the loss converges, at which point the converged training model is taken as the neural network model.
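The convergence check described above can be sketched as the loop below. The training-model interface (a fit_one_epoch method returning the current loss) and the tolerance value are assumptions used only to illustrate the "train until the loss function converges" logic.

    def train_until_converged(training_model, user_usage_data,
                              tolerance: float = 1e-4, max_epochs: int = 100):
        """Step S63 (sketch): keep training on the user usage information and stop
        once the loss stops improving; the converged model is the neural network model."""
        previous_loss = float("inf")
        for _ in range(max_epochs):
            loss = training_model.fit_one_epoch(user_usage_data)  # assumed interface
            if abs(previous_loss - loss) < tolerance:
                break                                             # loss has converged
            previous_loss = loss
        return training_model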
In the technical scheme provided by this embodiment, the parameter updating and training of the model are realized with an edge-computing framework based on federated learning, which protects user privacy; the model update process is moved to the mobile terminal 02 and the server 04, saving the computing resources of the display device 01.
Referring to fig. 5, fig. 5 is a flowchart illustrating a fourth embodiment of a control method of a display device according to the present invention, where the control method of the display device is applied to the display device, and the control method of the display device includes the following steps:
step S70, acquiring input parameters;
in this embodiment, the execution main body is a display device, and the display device acquires the input parameters before sending the instruction to the mobile terminal.
In an actual application scenario, as shown in fig. 10, the display device 01 acquires input parameters. Specifically, the display device 01 obtains corresponding input parameters according to an AI function that needs to be executed, for example, if the display device 01 needs to execute a face recognition function, the display device 01 obtains input parameters corresponding to a face recognition application; if the display device 01 needs to execute the target detection function, the display device 01 obtains an input parameter corresponding to a target detection application program; if the display device 01 needs to execute the 3D modeling function, the display device 01 obtains an input parameter corresponding to the 3D modeling application.
Step S80, generating an instruction according to the input parameter, and sending the instruction to a mobile terminal, so that the mobile terminal obtains the input parameter according to the instruction, and obtains an output parameter corresponding to the input parameter through the input parameter and a neural network model;
in this embodiment, after acquiring the input parameters, the display device 01 generates a corresponding instruction according to the input parameters, and sends the generated instruction to the mobile terminal 02, so that the mobile terminal 02 acquires the input parameters according to the instruction, and obtains output parameters corresponding to the input parameters through the input parameters and the neural network model.
In an actual application scenario, after obtaining the input parameters corresponding to an application program, the display device 01 also obtains the identity of the application program and encodes the input parameters together with that identity into an instruction. For example, if the input parameters acquired by the display device 01 correspond to the face recognition application, it further acquires the identity of the face recognition application and encodes the input parameters and that identity into a face recognition instruction; if the input parameters correspond to the target detection application, it further acquires the identity of the target detection application and encodes the input parameters and that identity into a target detection instruction; if the input parameters correspond to the 3D modeling application, it further acquires the identity of the 3D modeling application and encodes the input parameters and that identity into a 3D modeling instruction. The display device 01 sends the generated instruction to the mobile terminal 02, and the mobile terminal 02 generates the output parameter corresponding to the input parameter according to the instruction and the neural network model; for the generation of the output parameter, reference may be made to the first embodiment, which is not repeated here.
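On the display-device side, step S80 is the mirror image of the instruction-parsing sketch given earlier. Again, the JSON encoding and field names are assumptions; any encoding that carries the identity and the input parameters together would fit the description.

    import json
    from typing import Any, Dict

    def build_instruction(app_identity: str, input_parameters: Dict[str, Any]) -> bytes:
        """Step S80 (assumed format): encode the target application's identity and its
        input parameters into the instruction sent to the mobile terminal."""
        return json.dumps({"app_id": app_identity, "input": input_parameters}).encode("utf-8")

    # Example with hypothetical values: an instruction for the face recognition application.
    instruction = build_instruction("face_recognition", {"image_id": 42})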
And step S90, receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters.
In this embodiment, the display device 01 receives the output parameter fed back by the mobile terminal 02 and executes an operation corresponding to the output parameter.
In an actual application scenario, if the display device 01 receives face recognition information fed back by the mobile terminal 02, such as embedding data of a face, the display device 01 searches its face database for the face image with the highest matching degree according to the received embedding data, and displays that face image once it is found. Furthermore, the display device 01 may display not only the best-matching face image but also the specific matching degree, expressed as a percentage; in other embodiments the display form may be determined according to the actual situation, which is not limited here. If the output parameter received from the mobile terminal 02 is target detection information, the target detection information includes the category of the target object and the position of the target object, where the target object is the object to be identified, for example a person or a vehicle. The display device 01 generates a target detection frame and a target category according to the target detection information: the target detection frame indicates the position of the target object, and the target category indicates its type. For example, when a pedestrian in the image is detected, the target object is the pedestrian, the target detection frame is the position range of the pedestrian, and the target category is "person". The display device 01 then generates a target detection image according to the target detection frame and the target category, that is, an image that contains the target detection frame and displays the target category; after generating the target detection frame and the target category, the display device 01 maps them onto the captured detection image to produce the target detection image. It should be noted that when the mobile terminal 02 sends the output parameters to the display device 01, it also sends the captured detection image; after generating the target detection image, the display device 01 displays it, with the target detection frame and the target category included. If the output parameter received from the mobile terminal 02 is 3D image information, for example 3D facial expression information or human skeleton joint information, the display device 01 constructs a 3D model image, for example a 3D cartoon character, according to the 3D image information, and displays the 3D model image after generating it, or uses the 3D image information to drive a 3D application such as a motion-sensing game.
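As a concrete illustration of the face-recognition post-processing above, the sketch below compares the embedding returned by the mobile terminal against a local face database and reports the best match as a percentage. Cosine similarity is an assumption; the patent only speaks of the highest matching degree.

    from typing import Dict, Sequence, Tuple
    import numpy as np

    def best_face_match(embedding: Sequence[float],
                        face_database: Dict[str, Sequence[float]]) -> Tuple[str, float]:
        """Find the database entry whose stored embedding is most similar to the
        embedding fed back by the mobile terminal (similarity metric assumed)."""
        query = np.asarray(embedding, dtype=float)
        best_name, best_score = "", -1.0
        for name, stored in face_database.items():
            ref = np.asarray(stored, dtype=float)
            score = float(query @ ref / (np.linalg.norm(query) * np.linalg.norm(ref)))
            if score > best_score:
                best_name, best_score = name, score
        return best_name, round(best_score * 100.0, 1)  # matching degree as a percentage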
In the technical scheme provided by this embodiment, the display device 01 obtains the input parameters, generates an instruction according to the input parameters, sends the instruction to the mobile terminal, receives the output parameters fed back by the mobile terminal, and executes the operation corresponding to the output parameters. In this way, the display device 01 only performs the post-processing operation on the output parameters fed back by the mobile terminal 02, which saves computing resources and improves computing efficiency.
Referring to fig. 6, fig. 6 is a flowchart illustrating a fifth embodiment of the control method of a display device according to the present invention, where, based on the fourth embodiment, step S70 includes:
step S71, acquiring initial parameters to be operated and idle computing resources of the display device;
in this embodiment, the initial parameters are all the parameters that the display device needs to compute in order to execute the AI function, the initial parameters include the input parameters, and the idle computing resources are the computing resources currently available to the display device 01. In an actual application scenario, the display device obtains the initial parameters to be computed and its idle computing resources.
Step S72, determining the input parameters in the initial parameters according to the idle computing resources, wherein the idle computing resources are inversely proportional to the computing resources that the display device would occupy to compute the input parameters.
In this embodiment, after the display device obtains the initial parameters to be computed and the idle computing resources, it determines the input parameters within the initial parameters according to the idle computing resources, where the idle computing resources are inversely proportional to the computing resources that would be occupied by computing the input parameters. Specifically, the more computing resources are idle, the fewer computing resources computing the input parameters would occupy on the display device; the fewer computing resources are idle, the more computing resources computing the input parameters would occupy.
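One way to read this step is as a greedy split of the initial parameters: the display device keeps as much work as fits in its idle budget and offloads the rest as input parameters. The sketch below is a simplified interpretation; the cost model and the names estimated_cost and split_parameters are assumptions, not part of the disclosure.

```python
def split_parameters(initial_params, idle_resources, estimated_cost):
    """Split the initial parameters into input parameters (offloaded to the
    mobile terminal) and other parameters (kept on the display device).

    initial_params:  list of parameter-block identifiers to be computed
    idle_resources:  compute budget currently idle on the display device
    estimated_cost:  dict mapping a parameter block to its local compute cost
    """
    input_params, other_params, local_cost = [], [], 0.0
    for param in initial_params:
        cost = estimated_cost[param]
        if local_cost + cost <= idle_resources:
            other_params.append(param)     # fits in the idle budget, keep local
            local_cost += cost
        else:
            input_params.append(param)     # offload to the mobile terminal
    # The larger the idle budget, the smaller the offloaded (input) portion,
    # matching the inverse relation described in the embodiment.
    return input_params, other_params
```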
In the technical scheme provided by this embodiment, the input parameters are determined by acquiring the initial parameters to be computed and the idle computing resources of the display device, so that the computing resources of the display device are allocated and used effectively, maximizing computing efficiency.
Referring to fig. 7, fig. 7 is a flowchart illustrating a sixth embodiment of the control method of a display device according to the present invention, where, based on the fourth embodiment, step S90 includes:
step S91, receiving the output parameters fed back by the mobile terminal, and operating other parameters to obtain target parameters, wherein the other parameters and the input parameters form the initial parameters;
in this embodiment, referring to fig. 10, the other parameters are the parameters, other than the input parameters handed to the mobile terminal 02 for assisted processing, that the display device 01 computes itself when performing the AI function; the target parameters are the parameters obtained by the display device 01 computing these other parameters.
In an actual application scenario, when the display device 01 executes an AI function, both the input parameters and the other parameters must be computed. The other parameters are computed by the display device 01 itself and are not offloaded to the mobile terminal 02 for assisted processing; the other parameters and the input parameters together form the initial parameters, and the target parameters and the output parameters jointly determine the operation to be executed by the display device 01. Specifically, after receiving the output parameters from the mobile terminal 02, the display device 01 computes the other parameters to obtain the target parameters.
Step S92, determining an operation to be performed by the display device according to the target parameter and the output parameter, and performing the operation.
In this embodiment, the operation to be performed is the post-processing operation that the display device 01 needs to perform after receiving the output parameters from the mobile terminal 02, for example the post-processing operation of the face recognition function, of the target detection function, or of the 3D modeling function, such as rendering and presenting an image or building a model from the image.
In an actual application scenario, when the output parameters received by the display device 01 are face recognition information, the display device 01 computes the other parameters corresponding to the face recognition function to obtain the target parameters, determines from the target parameters and the face recognition information that the post-processing operation to be executed is to search the face database for the face image matching the face recognition information and display it, and executes that post-processing operation. When the output parameters received by the display device 01 are target detection information, the display device 01 computes the other parameters corresponding to the target detection function to obtain the target parameters, determines from the target parameters and the target detection information that the post-processing operation to be executed is to map the target detection information onto the target detection image and display the mapped image, and executes that post-processing operation. When the output parameters received by the display device 01 are 3D image information, the display device 01 computes the other parameters corresponding to the 3D modeling function to obtain the target parameters, determines from the target parameters and the 3D image information that the post-processing operation to be executed is 3D modeling, and executes that post-processing operation.
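The three branches above amount to a dispatch on the kind of output parameter. A minimal sketch, assuming a display helper object and the field names shown (none of which are fixed by the embodiment):

```python
def post_process(output_params, target_params, display):
    """Run the post-processing operation matching the output-parameter kind."""
    kind = output_params["kind"]
    if kind == "face_recognition":
        # Search the locally prepared face database (target parameters) for
        # the image matching the returned embedding, then display it.
        display.show_best_face_match(output_params["embedding"],
                                     target_params["face_database"])
    elif kind == "target_detection":
        # Map the returned boxes and categories onto the captured frame.
        display.show_detections(target_params["captured_frame"],
                                output_params["boxes"],
                                output_params["categories"])
    elif kind == "3d_modeling":
        # Build and render a 3D model (e.g. a cartoon character) from the
        # returned expression / skeleton-joint information.
        display.show_3d_model(output_params["pose"],
                              target_params["model_template"])
    else:
        raise ValueError(f"unknown output-parameter kind: {kind!r}")
```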
In an actual application scenario, referring to fig. 11, the display device 01 may also interact with a plurality of mobile terminals. The display device 01 may receive output parameters fed back by each mobile terminal at the same time, determine the operation it needs to perform according to the output parameters fed back by each mobile terminal, and perform that operation; for a single output parameter, the determination of the operation to be performed follows the description above and is not repeated here.
In the technical scheme provided by this embodiment, the operation to be executed by the display device 01 is determined from the target parameters and the output parameters. Because the output parameters are provided by the mobile terminal 02, the display device 01 only needs to compute the other parameters to obtain the target parameters, so its computing resources are not excessively occupied.
Referring to fig. 8, fig. 8 is a flowchart illustrating a seventh embodiment of the control method of a display device according to the present invention, where, based on the fourth embodiment, step S80 includes:
step S81, determining the currently running application program;
in this embodiment, the application program is a computer program for the display device to execute a corresponding function, and the corresponding AI function can be executed by running the application program.
In an actual application scenario, a variety of application programs, such as a face recognition application program, a target detection application program, and a 3D modeling application program, are preset in the display device. The user executes the function of an application program by running it. Each application program corresponds to a piece of application software; the user runs the corresponding application program by clicking the application software, and the display device automatically detects which application software was clicked, so that it can determine the application program it is currently running.
Step S82, determining an identity corresponding to the application program, generating an instruction according to the identity and the input parameter, and sending the instruction to the mobile terminal, so that the mobile terminal can obtain the input parameter according to the instruction, and obtain an output parameter corresponding to the input parameter through the input parameter and the neural network model.
In this embodiment, after determining the currently running application program, the display device further determines the identity of that application program and generates an instruction according to the identity and the input parameters.
In an actual application scenario, each application program in the display device corresponds to an identity. When the user clicks the application software and the corresponding application program runs, the display device automatically acquires the identity of the application program and encodes the identity and the input parameters to generate an instruction. For example, when the currently running application program is the face recognition application program, the display device acquires the identity of the face recognition application program and encodes that identity with the input parameters of the face recognition function to generate a face recognition instruction; when the currently running application program is the target detection application program, it acquires the identity of the target detection application program and encodes that identity with the input parameters of the target detection function to generate a target detection instruction; when the currently running application program is the 3D modeling application program, it acquires the identity of the 3D modeling application program and encodes that identity with the input parameters of the 3D modeling function to generate a 3D modeling instruction.
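The encoding step can be as simple as packing the identity and the input parameters into one serialized message. The patent does not fix a wire format, so the JSON encoding below is only an assumption used for illustration:

```python
import json

def build_instruction(app_identity, input_params):
    """Display-device side: encode the application identity together with its
    input parameters into a single instruction for the mobile terminal."""
    payload = {"identity": app_identity, "input_params": input_params}
    return json.dumps(payload).encode("utf-8")

def parse_instruction(raw_bytes):
    """Mobile-terminal side: recover the identity and the input parameters,
    then route them to the neural network model registered for that identity."""
    payload = json.loads(raw_bytes.decode("utf-8"))
    return payload["identity"], payload["input_params"]

# Example: a face-recognition instruction with hypothetical input parameters.
instruction = build_instruction("face_recognition", {"frame_id": 42})
identity, params = parse_instruction(instruction)
```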
In the technical scheme provided by this embodiment, by determining the currently running application program and generating the instruction according to its identity and the input parameters, the identity, the input parameters, and the application program correspond to each other exactly, which helps the mobile terminal perform model inference correctly and prevents wrong operations from being executed.
Referring to fig. 9, fig. 9 is a schematic flowchart of an eighth embodiment of a control method of a display device according to the present invention, where the control method of the display device is applied to a server, and the control method of the display device includes the following steps:
step S100, sending an initial model to each mobile terminal;
in this embodiment, the execution subject is a server, which may be a cloud server or an edge server. Compared with other servers, an edge server offers better privacy and helps protect the privacy and security of users; the edge server provides edge computing and is used to train and update the model.
In an actual application scenario, referring to fig. 12, an initial model is pre-stored in the server 04, and the server 04 transmits the initial model to the mobile terminal 02; the server 04 may transmit one initial model or a plurality of initial models. Furthermore, the server 04 and the mobile terminal 02 have different architectures and store models in different formats, so the format of the network model stored on the server 04 is not necessarily suitable for the mobile terminal 02. After acquiring the preset network model, the server 04 may therefore perform format conversion on it to obtain an initial model whose format suits the mobile terminal 02, and then send the converted initial model to the mobile terminal 02. For the format conversion, a corresponding conversion framework, such as TensorFlow Lite [2], NCNN [3], MNN [4], or MACE [5], may be selected.
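As a concrete example of such a conversion, a server-side TensorFlow SavedModel can be converted into a TensorFlow Lite flatbuffer before being sent to the mobile terminal. This is only a sketch of one possible pipeline; the patent names the frameworks but does not prescribe this exact flow:

```python
import tensorflow as tf

def convert_to_mobile_format(saved_model_dir, output_path):
    """Convert a server-side SavedModel into a .tflite file that a mobile
    terminal can load as its initial model."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()          # serialized flatbuffer bytes
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path
```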
Step S200, receiving gradients sent by each mobile terminal;
in this embodiment, after the server 04 sends the initial model to the mobile terminal 02, the mobile terminal 02 obtains a gradient from the initial model and sends the gradient to the server 04, and the server 04 receives the gradient from the mobile terminal 02.
Step S300, determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal, wherein the mobile terminal obtains a neural network model based on the accumulated gradient and the initial model, and determines an output parameter corresponding to an input parameter sent by a display device based on the neural network model, so that the display device executes an operation corresponding to the output parameter.
In this embodiment, after receiving the gradients from the mobile terminals, the server 04 aggregates the gradients with an aggregation algorithm to obtain an accumulated gradient and sends the accumulated gradient to the mobile terminal 02, so that the mobile terminal 02 obtains a neural network model based on the accumulated gradient and the initial model and determines the output parameters corresponding to the input parameters sent by the display device 01 based on the neural network model, enabling the display device 01 to perform the operation corresponding to the output parameters. For the specific process, reference may be made to the fourth embodiment, which is not repeated here.
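The aggregation step can be read as a (possibly weighted) average of the per-terminal gradients, in the spirit of federated averaging. The sketch below is an assumption about the aggregation algorithm, which the patent leaves open:

```python
import numpy as np

def aggregate_gradients(client_gradients, client_weights=None):
    """Combine per-terminal gradients into one accumulated gradient.

    client_gradients: list with one entry per mobile terminal; each entry is
    a list of numpy arrays, aligned layer by layer with the initial model.
    """
    if client_weights is None:
        client_weights = [1.0] * len(client_gradients)
    total = float(sum(client_weights))
    accumulated = []
    for layer_grads in zip(*client_gradients):            # layer by layer
        weighted = sum(w * g for w, g in zip(client_weights, layer_grads))
        accumulated.append(weighted / total)
    return accumulated   # sent back to every mobile terminal
```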
In the technical scheme provided by this embodiment, the accumulated gradient is determined by the edge server within a federated learning framework, which protects data privacy, improves the training efficiency of the neural network model, and further improves computing efficiency and response speed.
Based on the foregoing embodiments, the present invention further provides a mobile terminal, where the mobile terminal may include a memory, a processor, and a control program of a display device that is stored in the memory and is executable on the processor, and when the processor executes the control program of the display device, the steps of the method for controlling the display device according to any one of the foregoing embodiments are implemented.
Based on the foregoing embodiments, the present invention further provides a display device, where the display device may include a memory, a processor, and a control program of the display device, which is stored in the memory and is executable on the processor, and when the processor executes the control program of the display device, the steps of the control method of the display device according to any one of the foregoing embodiments are implemented.
Based on the foregoing embodiments, the present invention further provides a server, where the server may include a memory, a processor, and a control program of a display device that is stored in the memory and is executable on the processor, and when the processor executes the control program of the display device, the steps of the method for controlling the display device according to any one of the foregoing embodiments are implemented.
Based on the above embodiment, the present invention further provides a control system of a display device, where the control system of the display device includes:
the display equipment is used for acquiring input parameters; generating an instruction according to the input parameters, and sending the instruction to the mobile terminal; receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters;
the mobile terminal is used for receiving the instruction sent by the display equipment and analyzing the instruction to obtain an input parameter; determining an output parameter corresponding to the input parameter through a neural network model; sending the output parameters to the display device;
the server is used for sending the initial model to each mobile terminal; receiving gradients sent by each mobile terminal; and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal so that the mobile terminal can obtain a neural network model according to the accumulated gradient and the initial model.
Based on the above embodiments, the present invention also provides a computer readable storage medium, on which a control program of a display device is stored, the control program of the display device implementing the steps of the control method of the display device according to any one of the above embodiments when executed by a processor.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (16)

1. A control method of a display device is applied to a mobile terminal, and comprises the following steps:
receiving an instruction sent by display equipment, and analyzing the instruction to obtain an input parameter;
determining an output parameter corresponding to the input parameter through a neural network model;
and sending the output parameters to the display equipment so that the display equipment executes the operation corresponding to the output parameters.
2. The method as claimed in claim 1, wherein the instruction includes the input parameter and an identity of a target application program of the display device, and the step of determining the output parameter corresponding to the input parameter through the neural network model includes:
and inputting the input parameters into the neural network model corresponding to the identity to obtain output parameters corresponding to the input parameters, wherein the identity and the input parameters are obtained by analyzing the instruction.
3. The method for controlling a display device according to claim 1, wherein the step of determining the output parameter corresponding to the input parameter by the neural network model is preceded by the step of:
receiving an initial model sent by a server;
acquiring user use information of one or more intelligent devices;
and training the initial model according to the user use information to obtain the neural network model.
4. The method of claim 3, wherein the step of training the initial model according to the user usage information to obtain the neural network model comprises:
determining a gradient according to the user use information, and sending the gradient to a server so that the server determines an accumulated gradient according to the gradient sent by each mobile terminal;
receiving the accumulated gradient fed back by the server, and setting the training parameters of the initial model according to the accumulated gradient to obtain a training model;
and training the training model according to the user use information to obtain the neural network model.
5. A control method of a display device, the control method of the display device being applied to a display device, the control method of the display device comprising:
acquiring an input parameter;
generating an instruction according to the input parameter, and sending the instruction to a mobile terminal so that the mobile terminal can obtain the input parameter according to the instruction, and obtain an output parameter corresponding to the input parameter through the input parameter and a neural network model;
and receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters.
6. The method of controlling a display device according to claim 5, wherein the step of acquiring the input parameter includes:
acquiring initial parameters to be operated and idle computing resources of the display equipment;
determining the input parameter in the initial parameter according to the computing resource, wherein the computing resource is inversely proportional to the computing resource occupied by the display device for computing the input parameter.
7. The method for controlling a display device according to claim 5, wherein the performing the operation corresponding to the output parameter includes:
operating other parameters to obtain target parameters, wherein the other parameters and the input parameters form the initial parameters;
and determining the operation to be executed by the display equipment according to the target parameter and the output parameter, and executing the operation.
8. The method of controlling a display device according to claim 5, wherein the step of generating an instruction according to the input parameter includes:
determining a currently running application program;
and determining an identity corresponding to the application program, and generating an instruction according to the identity and the input parameters.
9. The method for controlling a display device according to claim 5, wherein the step of receiving the output parameter fed back by the mobile terminal and performing an operation corresponding to the output parameter comprises:
and determining the operation to be executed by the display equipment according to the output parameters fed back by each mobile terminal, and executing the operation.
10. A control method of a display device, the control method of the display device being applied to a server, the control method of the display device comprising:
sending an initial model to each mobile terminal;
receiving gradients sent by each mobile terminal;
and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal, wherein the mobile terminal obtains a neural network model based on the accumulated gradient and the initial model, and determines an output parameter corresponding to an input parameter sent by a display device based on the neural network model, so that the display device executes an operation corresponding to the output parameter.
11. The method of controlling a display device according to claim 10, wherein the step of transmitting the initial model to each mobile terminal comprises:
acquiring a preset model, and performing format conversion on the preset model to obtain an initial model;
and sending the initial model to each mobile terminal.
12. A mobile terminal, characterized in that the mobile terminal comprises a memory, a processor and a control program of a display device stored on the memory and operable on the processor, the control program of the display device realizing the steps of the control method of the display device according to any one of claims 1 to 4 when executed by the processor.
13. A display device comprising a memory, a processor, and a control program of the display device stored on the memory and executable on the processor, the control program of the display device implementing the steps of the control method of the display device according to any one of claims 5 to 9 when executed by the processor.
14. A server, characterized in that the server comprises a memory, a processor, and a control program of a display device stored on the memory and executable on the processor, the control program of the display device implementing the steps of the control method of the display device according to claim 10 or 11 when executed by the processor.
15. A control system of a display apparatus, the control system of the display apparatus comprising:
the display equipment is used for acquiring input parameters; generating an instruction according to the input parameters, and sending the instruction to the mobile terminal; receiving the output parameters fed back by the mobile terminal, and executing the operation corresponding to the output parameters;
the mobile terminal is used for receiving the instruction sent by the display equipment and analyzing the instruction to obtain an input parameter; determining an output parameter corresponding to the input parameter through a neural network model; sending the output parameters to the display device;
the server is used for sending the initial model to each mobile terminal; receiving gradients sent by each mobile terminal; and determining an accumulated gradient according to each gradient, and sending the accumulated gradient to each mobile terminal so that the mobile terminal can obtain a neural network model according to the accumulated gradient and the initial model.
16. A computer-readable storage medium, characterized in that a control program of a display device is stored thereon, which when executed by a processor implements the steps of the control method of the display device according to any one of claims 1-11.
CN202011500919.9A 2020-12-17 2020-12-17 Control method and system of display device and computer readable storage medium Pending CN112486691A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011500919.9A CN112486691A (en) 2020-12-17 2020-12-17 Control method and system of display device and computer readable storage medium
PCT/CN2021/132038 WO2022127522A1 (en) 2020-12-17 2021-11-22 Control method and system for display device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112486691A true CN112486691A (en) 2021-03-12

Family

ID=74914125

Country Status (2)

Country Link
CN (1) CN112486691A (en)
WO (1) WO2022127522A1 (en)


Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116383620B (en) * 2023-03-29 2023-10-20 北京鹅厂科技有限公司 Method and device for applying multi-mode artificial intelligence

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102752369B (en) * 2012-06-04 2015-11-25 北京联诚智胜信息技术有限公司 The supplying method of TV applications service and virtual content service platform
CN106302219A (en) * 2016-08-08 2017-01-04 深圳市东方时代新媒体有限公司 The method and system that the game of a kind of severe presents at intelligent television
US20180144244A1 (en) * 2016-11-23 2018-05-24 Vital Images, Inc. Distributed clinical workflow training of deep learning neural networks
CN107656443A (en) * 2017-09-18 2018-02-02 成都易慧家科技有限公司 A kind of intelligent home control system and method based on deep learning
CN110572469B (en) * 2019-09-18 2022-04-12 江苏视博云信息技术有限公司 Data transmission method, input device, cloud server and cloud game system
CN112486691A (en) * 2020-12-17 2021-03-12 深圳Tcl新技术有限公司 Control method and system of display device and computer readable storage medium

Cited By (5)

Publication number Priority date Publication date Assignee Title
WO2022127522A1 (en) * 2020-12-17 2022-06-23 深圳Tcl新技术有限公司 Control method and system for display device, and computer-readable storage medium
CN113434190A (en) * 2021-06-30 2021-09-24 青岛海尔科技有限公司 Data processing method and device, storage medium and electronic equipment
CN113434190B (en) * 2021-06-30 2023-06-16 青岛海尔科技有限公司 Data processing method and device, storage medium and electronic equipment
CN116541228A (en) * 2023-07-06 2023-08-04 深圳市彤兴电子有限公司 Touch response detection method and device for display and computer equipment
CN116541228B (en) * 2023-07-06 2024-01-19 深圳市彤兴电子有限公司 Touch response detection method and device for display and computer equipment

Also Published As

Publication number Publication date
WO2022127522A1 (en) 2022-06-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination