CN112019931A - Man-machine interaction method and device of smart television, smart television and storage medium - Google Patents


Info

Publication number
CN112019931A
CN112019931A (application CN202010878409.9A)
Authority
CN
China
Prior art keywords
user
user image
intelligent television
image
screen
Prior art date
Legal status
Granted
Application number
CN202010878409.9A
Other languages
Chinese (zh)
Other versions
CN112019931B (en)
Inventor
何腾飞
Current Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd
Priority to CN202010878409.9A
Publication of CN112019931A
Application granted
Publication of CN112019931B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Abstract

The embodiment of the invention discloses a man-machine interaction method and device for a smart television, a smart television, and a storage medium. The method comprises the following steps: acquiring first voice information input by a user; activating a camera of the smart television according to the first voice information; acquiring a user image in real time through the camera; and displaying the user image to the user through a screen of the smart television. The technical solution provided by the embodiment of the invention enables the smart television to function as a mirror and thereby replace a traditional mirror; compared with a mobile terminal such as a mobile phone or tablet computer, the smart television is more convenient to use and more easily obtains a clear, complete image, satisfying a variety of user needs.

Description

Man-machine interaction method and device of smart television, smart television and storage medium
Technical Field
The embodiment of the invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction method and device for an intelligent television, the intelligent television and a storage medium.
Background
For traditional dressing and make-up, a mirror is an indispensable piece of furniture in every household. An ordinary mirror, however, serves only a single function; with the development of smart electronic products, users' demands on what a mirror can do have grown increasingly rich, and an ordinary mirror can no longer easily satisfy them.
At present, the function of a mirror is usually approximated with a mobile terminal such as a mobile phone or a tablet computer: the user obtains a real-time image of himself or herself through the front camera of the mobile terminal. However, the screen of a mobile terminal is usually small, so the displayed image is also small and inconvenient to use; moreover, the image is limited by the pixel count of the mobile terminal's camera and rarely reaches the definition the user requires.
Disclosure of Invention
The embodiment of the invention provides a man-machine interaction method and device of an intelligent television, the intelligent television and a storage medium, and aims to realize a mirror function on the intelligent television.
In a first aspect, an embodiment of the present invention provides a human-computer interaction method for a smart television, where the method includes:
acquiring first voice information input by a user;
activating a camera of the smart television according to the first voice information;
acquiring a user image in real time through the camera;
and displaying the user image to a user through a screen of the intelligent television.
In a second aspect, an embodiment of the present invention further provides a human-computer interaction device for a smart television, where the device includes:
the first information acquisition module is used for acquiring first voice information input by a user;
the camera activating module is used for activating a camera of the intelligent television according to the first voice information;
the image acquisition module is used for acquiring a user image in real time through the camera;
and the image display module is used for displaying the user image to a user through the screen of the intelligent television.
In a third aspect, an embodiment of the present invention further provides a smart television, where the smart television includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the human-computer interaction method of the smart television provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the human-computer interaction method for a smart television provided in any embodiment of the present invention.
The embodiment of the invention provides a man-machine interaction method for a smart television. When first voice information input by a user is acquired, a camera of the smart television is activated; a user image is acquired in real time through the camera and then displayed to the user, so that the smart television functions as a mirror.
Drawings
Fig. 1 is a flowchart of a human-computer interaction method of a smart television according to an embodiment of the present invention;
fig. 2 is a flowchart of a human-computer interaction method of an intelligent television according to a second embodiment of the present invention;
fig. 3 is a flowchart of a human-computer interaction method of an intelligent television according to a third embodiment of the present invention;
fig. 4 is a flowchart of a man-machine interaction method of an intelligent television according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a human-computer interaction device of an intelligent television according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an intelligent television according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a human-computer interaction method of an intelligent television according to an embodiment of the present invention. The embodiment is applicable to the case that the smart television is used as a mirror, and the method can be executed by the human-computer interaction device of the smart television provided by the embodiment of the invention, and the device can be realized by hardware and/or software, and can be generally integrated in the smart television. As shown in fig. 1, the method specifically comprises the following steps:
and S11, acquiring the first voice information input by the user.
Specifically, when the user wants to use the mirror function of the smart television, the user can issue a start instruction to the smart television in voice form, so that on acquiring the first voice information corresponding to the start instruction, the smart television enters the mirror mode from a standby mode or an ordinary use mode.
And S12, activating a camera of the intelligent television according to the first voice information.
The camera can be a camera built into the smart television, or an ordinary external camera bound to the smart television through wireless networking (Wi-Fi) or Bluetooth, so that an existing smart television can conveniently be given the mirror function. Compared with the limited camera of a mobile terminal such as a mobile phone or a tablet computer, the camera used in this embodiment can provide better definition.
Specifically, a voice library containing a voice start instruction can be preset before the mirror function of the smart television is used. When first voice information input by a user is acquired, the first voice information is matched against the voice library; if it matches the voice start instruction, the camera of the smart television is activated to acquire an image of the user. When the mirror function is not in use, the camera can remain off to reduce power consumption. The voice start instruction can be, for example, "mirror mode" or "open mirror". Controlling the smart television into the mirror mode by voice lets the user open the camera from any position near the smart television; since entering the mirror mode may require a series of start-up steps, the smart television can prepare to enter the mirror mode in advance, saving the user waiting time.
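The matching step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the command phrases, function names, and camera interface are assumptions made for the example.

```python
# Sketch of matching recognized first voice information against a preset
# voice library of start instructions. The phrases below are the examples
# given in the description; everything else is illustrative.
VOICE_START_COMMANDS = {"mirror mode", "open mirror"}

def matches_start_command(first_voice_info: str) -> bool:
    """Return True if the recognized utterance matches a start instruction."""
    return first_voice_info.strip().lower() in VOICE_START_COMMANDS

def handle_first_voice_info(first_voice_info: str, camera) -> bool:
    """Activate the camera only when a start instruction is matched."""
    if matches_start_command(first_voice_info):
        camera.activate()
        return True
    return False  # no match: camera stays off to reduce consumption
```

A caller would feed this function the text produced by the television's speech recognizer and pass in its camera control object.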
Optionally, in addition to voice control, the smart television can be made to enter the mirror mode through temperature sensing or remote control: specifically, the camera of the smart television is activated when the smart television senses human body temperature or receives a corresponding signal from a remote controller. Multiple control modes thus bring convenience to the user.
And S13, acquiring the user image in real time through the camera.
Specifically, after the camera is activated, the user image can be captured through the camera, and the user's configuration of parameters such as the camera's frame rate, bit stream and resolution can be applied, so that the captured user image appears to the human eye to be real-time or close to real-time. This reduces the delay in displaying the user image and improves the user experience.
And S14, displaying the user image to the user through the screen of the intelligent television.
Specifically, after the camera acquires the user image in real time, the image is transmitted to the smart television, which decodes it and displays it on its screen, so that the user sees his or her own real-time image and the mirror function of the smart television is realized.
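The capture-transmit-decode-display pipeline of steps S13 and S14 can be sketched schematically. The camera, decoder and screen objects here are stand-in interfaces invented for the example; a real smart television would use its platform's media APIs.

```python
# Hypothetical sketch of the mirror-mode display loop: pull encoded
# frames from the camera, decode them on the television, and show each
# decoded frame on the screen so the user sees a near-real-time image.
def mirror_loop(camera, decoder, screen, max_frames: int) -> int:
    """Run the loop for at most max_frames frames; return frames shown."""
    shown = 0
    for _ in range(max_frames):
        encoded = camera.read_frame()
        if encoded is None:        # camera deactivated or stream ended
            break
        frame = decoder.decode(encoded)
        screen.show(frame)
        shown += 1
    return shown
```

In practice the loop would be driven by the camera's frame callbacks rather than polling, but the data flow is the same.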
According to the technical scheme provided by the embodiment of the invention, the camera of the intelligent television is activated when the first voice information input by the user is acquired, the image of the user is acquired in real time through the camera and then displayed to the user, so that the function of the intelligent television as a mirror is realized.
Example two
Fig. 2 is a flowchart of a human-computer interaction method of an intelligent television according to a second embodiment of the present invention. The technical solution of this embodiment further refines the solution above: optionally, on the basis of the mirror function, apparel can be matched to the user image and displayed to the user to provide outfit suggestions. Specifically, in this embodiment, after displaying the user image to the user through the screen of the smart television, the method further includes: matching at least one target apparel item in an apparel library according to the user image; fitting the target apparel at the corresponding position on the user image; and displaying the fitted user image to the user through the screen. Correspondingly, as shown in fig. 2, the method specifically includes the following steps:
and S21, acquiring the first voice information input by the user.
And S22, activating a camera of the intelligent television according to the first voice information.
And S23, acquiring the user image in real time through the camera.
And S24, displaying the user image to the user through the screen of the intelligent television.
And S25, matching at least one target dress in the dress library according to the user image.
And S26, fitting the target clothes at the corresponding positions on the user image.
And S27, displaying the attached user image to the user through a screen.
Specifically, after the user image is displayed on the screen of the smart television, recommended target apparel can be matched in a preset apparel library according to features of the user image such as figure proportion, skin colour and face shape. The target apparel is then fitted, in the way it would normally be worn, to the corresponding position on the user image in real time as the user image changes, and the fitted user image is displayed to the user on the screen. The target apparel can be all apparel meeting the recommendation conditions; it can be replaced automatically at a preset period, or replaced on receiving an apparel-change instruction from the user so as to show the user different outfits, where the change instruction can be input as a non-touch gesture, a touch gesture, voice, or the like. Optionally, before at least one target apparel item is matched in the apparel library according to the user image, an instruction from the user to enable the apparel-matching function can be received, so that outfit suggestions are provided according to the user's needs.
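Feature-based matching against the apparel library could look something like the sketch below. The feature names, scoring rule and library layout are all assumptions for illustration; the patent does not specify a matching algorithm.

```python
# Illustrative sketch: score each apparel item by how many of its
# "suits" tags (figure proportion, skin colour, face shape, ...) agree
# with the features extracted from the user image, then return the
# best-scoring items as the recommended target apparel.
def match_apparel(user_features: dict, apparel_library: list, top_n: int = 3) -> list:
    def score(item: dict) -> int:
        return sum(1 for key, value in item["suits"].items()
                   if user_features.get(key) == value)
    ranked = sorted(apparel_library, key=score, reverse=True)
    # Only items that match at least one feature qualify as recommendations.
    return [item["name"] for item in ranked[:top_n] if score(item) > 0]
```

A real system would use learned similarity between the user image and apparel images rather than tag equality, but the selection step has the same shape.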
On the basis of the above technical solution, optionally, before matching at least one target apparel in the apparel library according to the user image, the method further includes: acquiring current weather information; correspondingly, matching at least one target dress in the dress library according to the user image comprises the following steps: at least one target apparel is matched in the apparel library according to the user image and the current weather information.
Specifically, the current weather information can be acquired from the web over the network, or through any weather application linked with the smart television. After the current weather information is obtained, the target apparel is determined according to the weather together with features such as figure proportion, skin colour and face shape in the user image: specifically, the in-season range of the apparel library can first be determined from the current weather information, and recommended target apparel is then matched within that range according to the user image, reducing the number of relatively expensive comparisons against the user image. Determining the recommended target apparel according to the current weather information better fits people's actual dressing needs.
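The two-stage matching in this paragraph, narrowing the library by weather first and matching user features second, can be sketched as follows. The temperature bands and the style field are invented for the example.

```python
# Sketch of weather-first matching: stage 1 keeps only in-season apparel,
# stage 2 matches user features within that smaller candidate set, so the
# costlier user-image comparisons run on fewer items.
def seasonal_range(apparel_library: list, temperature_c: float) -> list:
    """Keep apparel whose suitable temperature band covers the weather."""
    return [item for item in apparel_library
            if item["min_temp"] <= temperature_c <= item["max_temp"]]

def match_with_weather(user_features: dict, apparel_library: list,
                       temperature_c: float) -> list:
    candidates = seasonal_range(apparel_library, temperature_c)
    # Stand-in for the feature matching of the previous step.
    return [item["name"] for item in candidates
            if item.get("style") == user_features.get("style")]
```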
According to the technical solution provided by the embodiment of the invention, recommended target apparel is matched in the apparel library according to the user image, fitted at the corresponding position on the user image, and the fitted user image is displayed to the user. Outfit suggestions are thus presented to the user more realistically, and the user can better settle on a preferred outfit according to how the target apparel looks when worn, as shown on the screen.
EXAMPLE III
Fig. 3 is a flowchart of a human-computer interaction method of an intelligent television according to a third embodiment of the present invention. The technical scheme of the embodiment is further refined on the basis of the technical scheme, and optionally, on the basis of realizing the mirror function, the makeup can be added to the image of the user and presented to the user to provide makeup suggestions. Specifically, in this embodiment, after displaying the user image to the user through the screen of the smart television, the method further includes: matching at least one target makeup in a makeup base according to the user image; adding the target makeup to the corresponding location on the user's image; and displaying the added user image to the user through a screen. Correspondingly, as shown in fig. 3, the method specifically includes the following steps:
and S31, acquiring the first voice information input by the user.
And S32, activating a camera of the intelligent television according to the first voice information.
And S33, acquiring the user image in real time through the camera.
And S34, displaying the user image to the user through the screen of the intelligent television.
And S35, matching at least one target makeup in the makeup base according to the user image.
And S36, adding the target makeup to the corresponding position on the user image.
And S37, displaying the added user image to the user through a screen.
Specifically, after the user image is displayed on the screen of the smart television, a recommended target makeup can be matched in a preset makeup library according to features in the user image such as facial skin colour, the shape of the facial features and the face shape. The target makeup is then added, in the way makeup is normally applied, to the corresponding position on the user image in real time as the user image changes, and the resulting user image is displayed to the user on the screen. The target makeup can be all makeup meeting the recommendation conditions; it can be replaced automatically at a preset period, or replaced on receiving a makeup-change instruction from the user so as to show the user different looks, where the change instruction can be input as a non-touch gesture, a touch gesture, voice, or the like. Optionally, before at least one target makeup is matched in the makeup library according to the user image, an instruction from the user to enable makeup recommendation can be received, so that makeup suggestions are provided according to the user's needs.
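The replacement behaviour described above, cycling to the next recommended look on a change instruction, can be sketched as follows. The look names and the period value are illustrative assumptions; a real implementation would tie the period to a timer.

```python
# Sketch of managing the currently displayed target makeup: all looks
# meeting the recommendation conditions are held in order, and a change
# instruction (gesture, touch or voice) advances to the next look.
class MakeupRecommender:
    def __init__(self, target_looks, period_s: float = 10.0):
        self.looks = list(target_looks)  # looks meeting the conditions
        self.period_s = period_s         # automatic replacement period
        self.index = 0

    def current(self) -> str:
        return self.looks[self.index]

    def on_change_instruction(self) -> str:
        """Advance to the next look, wrapping around at the end."""
        self.index = (self.index + 1) % len(self.looks)
        return self.current()
```

The same structure would serve the apparel-change instruction of the second embodiment.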
According to the technical solution provided by the embodiment of the invention, a recommended target makeup is matched in the makeup library according to the user image, added to the corresponding position on the user image, and the resulting user image is displayed to the user. Makeup suggestions are thus presented to the user realistically, and the user can better determine the makeup that suits them according to how the target makeup displays on the screen.
Example four
Fig. 4 is a flowchart of a man-machine interaction method of an intelligent television according to a fourth embodiment of the present invention. The technical scheme of the embodiment is further refined on the basis of the technical scheme, and optionally, on the basis of realizing the mirror or on the basis of realizing clothes recommendation or dressing recommendation, the user image displayed in the screen can be photographed so as to be retained. Specifically, in this embodiment, after displaying the user image to the user through the screen of the smart television, the method further includes: acquiring second voice information and/or first gesture information input by a user; and taking a picture of the user image currently shown in the screen according to the second voice information and/or the first gesture information. Correspondingly, as shown in fig. 4, the method may specifically include the following steps:
and S41, acquiring the first voice information input by the user.
And S42, activating a camera of the intelligent television according to the first voice information.
And S43, acquiring the user image in real time through the camera.
And S44, displaying the user image to the user through the screen of the intelligent television.
And S45, acquiring the second voice information and/or the first gesture information input by the user.
And S46, taking a picture of the user image currently shown in the screen according to the second voice information and/or the first gesture information.
Specifically, when the user wants to use the photographing function of the smart television, the user can input second voice information and/or first gesture information, so that the smart television photographs the user image currently shown on the screen when that information is acquired. The voice library can further contain a voice photographing instruction: when second voice information input by the user is acquired, it is matched against the voice library, and if it matches the voice photographing instruction, the user image is photographed. Alternatively, a gesture library containing a gesture photographing instruction can be preset before the photographing function is used: when first gesture information input by the user is acquired, it is matched against the gesture library, and if it matches the gesture photographing instruction, the user image is photographed. The two can also be combined: if the second voice information matches in the voice library, the smart television enters a photographing mode to prepare for photographing, and then photographs the user image when first gesture information input by the user is acquired and successfully matched in the gesture library. For example, the voice photographing instruction may be "photograph", "jean" or "eggplant", and the gesture photographing instruction may be "scissor hand", etc.
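The combined variant, voice enters the photographing mode, a gesture then takes the picture, is a small two-state machine. The sketch below models only that variant; command phrases beyond the description's examples, and all names, are assumptions.

```python
# Sketch of the combined trigger: a matched voice photographing
# instruction arms photo mode, and a matched gesture photographing
# instruction then actually takes the picture.
VOICE_PHOTO_COMMANDS = {"photograph", "eggplant"}
GESTURE_PHOTO_COMMANDS = {"scissor hand"}

class PhotoTrigger:
    def __init__(self):
        self.photo_mode = False

    def on_voice(self, second_voice_info: str) -> None:
        if second_voice_info.strip().lower() in VOICE_PHOTO_COMMANDS:
            self.photo_mode = True   # prepare for photographing

    def on_gesture(self, first_gesture_info: str) -> bool:
        """Return True when the current screen image should be captured."""
        if self.photo_mode and first_gesture_info in GESTURE_PHOTO_COMMANDS:
            self.photo_mode = False
            return True
        return False
```

The voice-only and gesture-only variants in the text would simply call the capture step directly on a successful match.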
Optionally, after the user image currently shown on the screen is photographed according to the second voice information and/or the first gesture information, the method further includes: saving the photograph locally and/or uploading it to a server. Specifically, after the user image is photographed, the photograph can be stored directly in the smart television's local image library, so that the user can view all photographs taken on the smart television itself; it can also be uploaded to a cloud server, so that the user can conveniently obtain the required photographs from the server through other mobile terminal devices. Meanwhile, the smart television may retain only a preset number of photographs, selected by photographing time, which satisfies the user's need to view recent photographs while saving the smart television's storage resources.
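Retaining only a preset number of photographs by photographing time amounts to a simple pruning rule; a sketch, with an invented record layout of `(timestamp, filename)` pairs:

```python
# Sketch of pruning the television's local image library: keep only the
# max_keep most recently taken photographs, newest first, so local
# storage stays bounded while the latest pictures remain viewable.
def prune_local_photos(photos: list, max_keep: int) -> list:
    """photos: list of (taken_at, filename) tuples; returns the kept set."""
    return sorted(photos, key=lambda p: p[0], reverse=True)[:max_keep]
```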
Optionally, after the photograph is saved locally and/or uploaded to the server, the method further includes: acquiring third voice information and/or second gesture information input by the user; and uploading the photograph to a social platform according to the third voice information and/or the second gesture information. Specifically, commonly used social platforms can be logged into in advance while using the smart television; then, each time a photograph is saved or uploaded through the photographing function, the current photograph can be uploaded directly to the corresponding social platform according to the user's needs. The voice library can further contain a voice upload instruction: when the third voice information matches the voice upload instruction in the voice library, the current photograph is uploaded to the corresponding social platform. Alternatively, the gesture library can contain a gesture upload instruction: when the second gesture information matches the gesture upload instruction in the gesture library, the current photograph is uploaded to the corresponding social platform. For example, the voice upload instruction may be "OK" or "upload", and the gesture upload instruction may be "point up", etc.
According to the technical solution provided by the embodiment of the invention, the user image currently shown on the screen of the smart television is photographed according to acquired voice information or gesture information input by the user, and the photograph can be saved or uploaded for convenient viewing. The photographing function of the smart television is thereby realized, satisfying more of the user's needs.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a human-computer interaction device of an intelligent television according to a fifth embodiment of the present invention. The device can be realized by hardware and/or software, and can be generally integrated in a smart television. As shown in fig. 5, the apparatus includes:
a first information obtaining module 51, configured to obtain first voice information input by a user;
the camera activating module 52 is configured to activate a camera of the smart television according to the first voice information;
an image obtaining module 53, configured to obtain a user image in real time through a camera;
and an image display module 54, configured to display the user image to the user through the screen of the smart television.
According to the technical scheme provided by the embodiment of the invention, the camera of the intelligent television is activated when the first voice information input by the user is acquired, the image of the user is acquired in real time through the camera and then displayed to the user, so that the function of the intelligent television as a mirror is realized.
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
the target clothing matching module is used for matching at least one target clothing in the clothing library according to the user image after the user image is displayed to the user through the screen of the smart television;
the target clothing fitting module is used for fitting the target clothing at the corresponding position on the user image;
and the target clothing display module is used for displaying the attached user image to the user through a screen.
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
the weather information acquisition module is used for acquiring current weather information before at least one target garment is matched in the garment library according to the user image;
correspondingly, the target clothing matching module is specifically configured to:
at least one target apparel is matched in the apparel library according to the user image and the current weather information.
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
the target makeup matching module is used for matching at least one target makeup in the makeup library according to the user image after the user image is displayed to the user through the screen of the smart television;
a target makeup adding module for adding a target makeup to a corresponding position on the user image;
and the target makeup display module is used for displaying the added user image to the user through the screen.
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
the second information acquisition module is used for acquiring second voice information and/or first gesture information input by a user after the user image is displayed to the user through a screen of the smart television;
and the photographing module is used for photographing the user image currently displayed in the screen according to the second voice information and/or the first gesture information.
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
and the picture storage module is used for storing the taken picture locally and/or uploading the taken picture to the server after the user image currently shown on the screen has been photographed according to the second voice information and/or the first gesture information.
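The photographing and storage modules amount to a command dispatcher: a recognized voice phrase or gesture triggers capture of the current frame, then local save and/or server upload. A sketch under assumed trigger names (the patent does not specify the phrases, gestures, or storage back-ends):

```python
# Hypothetical dispatcher for the photographing and picture-storage modules.
# Trigger phrases, gesture labels, and storage back-ends are assumptions.

PHOTO_VOICE_TRIGGERS = {"take a photo", "cheese"}   # second voice information
PHOTO_GESTURE_TRIGGERS = {"v_sign"}                 # first gesture information

def handle_capture(command, frame, local_store, upload=None):
    """Photograph the currently shown frame when a trigger is recognized."""
    if command not in PHOTO_VOICE_TRIGGERS and \
       command not in PHOTO_GESTURE_TRIGGERS:
        return False
    local_store.append(frame)          # save locally
    if upload is not None:
        upload(frame)                  # and/or push to the server
    return True

photos, uploaded = [], []
handled = handle_capture("cheese", "frame-001", photos, uploaded.append)
```

The same dispatcher shape extends naturally to the third voice information / second gesture information of the upload-to-social-platform module.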
On the basis of the above technical solution, optionally, the human-computer interaction device of the smart television further includes:
the third information acquisition module is used for acquiring third voice information and/or second gesture information input by the user after the taken picture is stored locally and/or uploaded to the server;
and the photo uploading module is used for uploading the taken picture to the social platform according to the third voice information and/or the second gesture information.
The human-computer interaction device of the smart television provided by this embodiment of the present invention can execute the human-computer interaction method of the smart television provided by any embodiment of the present invention, and has the functional modules for executing that method together with its corresponding beneficial effects.
It should be noted that, in the above embodiment of the human-computer interaction device for a smart television, the units and modules included are divided only according to functional logic; the division is not limited thereto, as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Embodiment Six
Fig. 6 is a schematic structural diagram of a smart television according to the sixth embodiment of the present invention, showing a block diagram of an exemplary smart television suitable for implementing an embodiment of the present invention. The smart television shown in Fig. 6 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present invention. As shown in Fig. 6, the smart television includes a processor 61, a memory 62, an input device 63, and an output device 64. The number of processors 61 in the smart television may be one or more; one processor 61 is taken as an example in Fig. 6. The processor 61, the memory 62, the input device 63, and the output device 64 in the smart television may be connected by a bus or in other manners; connection by a bus is taken as an example in Fig. 6.
The memory 62 is used as a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the man-machine interaction method of the smart television in the embodiment of the present invention (for example, the first information obtaining module 51, the camera activating module 52, the image obtaining module 53, and the image displaying module 54 in the man-machine interaction device of the smart television). The processor 61 executes various functional applications and data processing of the smart television by running software programs, instructions and modules stored in the memory 62, so as to implement the above-mentioned human-computer interaction method of the smart television.
The memory 62 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the smart tv, and the like. Further, the memory 62 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 62 may further include a memory remotely located from the processor 61, which may be connected to the smart tv through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 63 may be used to acquire voice information input by a user, and to generate key signal inputs related to user settings and function control of the smart tv, and the like. The output device 64 may include a screen or the like that may be used to present the user with the user's images captured in real time.
Embodiment Seven
The seventh embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a human-computer interaction method for a smart television, the method including:
acquiring first voice information input by a user;
activating a camera of the smart television according to the first voice information;
acquiring a user image in real time through a camera;
and displaying the user image to the user through a screen of the intelligent television.
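The four steps above can be sketched end to end as a small state machine. This is only an illustration of the claimed control flow, not the patented implementation: the wake phrase is invented, and plain Python objects stand in for the camera and screen.

```python
# End-to-end sketch of the four claimed steps: acquire voice input,
# activate the camera, capture frames in real time, show them on screen.
# The wake phrase and the camera/screen stand-ins are assumptions.

class MirrorTV:
    WAKE_PHRASE = "mirror mirror"      # hypothetical first voice information

    def __init__(self, camera_frames):
        self._frames = iter(camera_frames)   # stand-in for the camera feed
        self.camera_on = False
        self.screen = None                   # stand-in for the TV screen

    def on_voice(self, text):
        # Steps 1-2: acquire the voice input; activate the camera on a match.
        if text.strip().lower() == self.WAKE_PHRASE:
            self.camera_on = True

    def tick(self):
        # Steps 3-4: capture the current frame and present it on the screen.
        if self.camera_on:
            self.screen = next(self._frames)

tv = MirrorTV(["frame-0", "frame-1"])
tv.on_voice("mirror mirror")
tv.tick()
```

Before the wake phrase is heard, `tick()` is a no-op, which matches the claim ordering: the camera is activated only in response to the first voice information.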
The storage medium may be any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as a CD-ROM, a floppy disk, or a tape device; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements, and the like. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different second computer system connected to that computer system through a network (such as the Internet). The second computer system may provide the program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided by the embodiment of the present invention includes computer-executable instructions, where the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the human-computer interaction method for a smart television provided by any embodiment of the present invention.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, although the former is preferable in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It is to be noted that the foregoing is merely a description of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in detail through the above embodiments, it is not limited thereto and may include other equivalent embodiments without departing from its spirit, the scope of the present invention being determined by the scope of the appended claims.

Claims (10)

1. A man-machine interaction method of an intelligent television is characterized by comprising the following steps:
acquiring first voice information input by a user;
activating a camera of the smart television according to the first voice information;
acquiring a user image in real time through the camera;
and displaying the user image to a user through a screen of the intelligent television.
2. The human-computer interaction method of the intelligent television set according to claim 1, wherein after the user image is presented to the user through the screen of the intelligent television set, the method further comprises:
matching at least one target garment in a garment library according to the user image;
fitting the target clothes at corresponding positions on the user image;
and displaying the fitted user image to a user through the screen.
3. The human-computer interaction method of the smart television set as claimed in claim 2, wherein before the matching of at least one target dress in a dress library according to the user image, the method further comprises:
acquiring current weather information;
correspondingly, the matching at least one target dress in a dress library according to the user image comprises the following steps:
matching at least one target garment in the garment library according to the user image and the current weather information.
4. The human-computer interaction method of the intelligent television set according to claim 1, wherein after the user image is presented to the user through the screen of the intelligent television set, the method further comprises:
matching at least one target makeup in a makeup base according to the user image;
adding the target makeup to a corresponding location on the user image;
and displaying the user image with the added makeup to a user through the screen.
5. The human-computer interaction method for the intelligent television as claimed in any one of claims 1 to 4, wherein after the user image is presented to the user through the screen of the intelligent television, the method further comprises:
acquiring second voice information and/or first gesture information input by a user;
and photographing the user image currently displayed in the screen according to the second voice information and/or the first gesture information.
6. The human-computer interaction method of the smart television as claimed in claim 5, wherein after the photographing of the user image currently shown on the screen according to the second voice information and/or the first gesture information, the method further comprises:
and saving the taken picture locally and/or uploading the taken picture to a server.
7. The human-computer interaction method of the smart television as claimed in claim 6, wherein after the saving of the taken picture locally and/or the uploading of the taken picture to a server, the method further comprises:
acquiring third voice information and/or second gesture information input by a user;
and uploading the taken picture to a social platform according to the third voice information and/or the second gesture information.
8. A man-machine interaction device of an intelligent television is characterized by comprising:
the first information acquisition module is used for acquiring first voice information input by a user;
the camera activating module is used for activating a camera of the intelligent television according to the first voice information;
the image acquisition module is used for acquiring a user image in real time through the camera;
and the image display module is used for displaying the user image to a user through the screen of the intelligent television.
9. An intelligent television, comprising:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, the one or more programs cause the one or more processors to implement the human-computer interaction method of the smart television set as recited in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for human-computer interaction of a smart tv as claimed in any one of claims 1-7.
CN202010878409.9A 2020-08-27 2020-08-27 Man-machine interaction method and device of smart television, smart television and storage medium Active CN112019931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010878409.9A CN112019931B (en) 2020-08-27 2020-08-27 Man-machine interaction method and device of smart television, smart television and storage medium

Publications (2)

Publication Number Publication Date
CN112019931A true CN112019931A (en) 2020-12-01
CN112019931B CN112019931B (en) 2021-12-31

Family

ID=73502618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010878409.9A Active CN112019931B (en) 2020-08-27 2020-08-27 Man-machine interaction method and device of smart television, smart television and storage medium

Country Status (1)

Country Link
CN (1) CN112019931B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170171462A1 (en) * 2015-12-15 2017-06-15 Le Holdings (Beijing) Co., Ltd. Image Collection Method, Information Push Method and Electronic Device, and Mobile Phone
CN108090422A (en) * 2017-11-30 2018-05-29 深圳云天励飞技术有限公司 Hair style recommends method, Intelligent mirror and storage medium
CN110086996A (en) * 2019-05-17 2019-08-02 深圳创维-Rgb电子有限公司 A kind of automatic photographing method based on TV, TV and storage medium
CN111475717A (en) * 2020-03-27 2020-07-31 珠海格力电器股份有限公司 Dressing recommendation method and device, intelligent air conditioner and storage medium
CN111553220A (en) * 2020-04-21 2020-08-18 海信集团有限公司 Intelligent device and data processing method


Similar Documents

Publication Publication Date Title
CN111510645B (en) Video processing method and device, computer readable medium and electronic equipment
US20160180593A1 (en) Wearable device-based augmented reality method and system
WO2015188614A1 (en) Method and device for operating computer and mobile phone in virtual world, and glasses using same
EP3136793A1 (en) Method and apparatus for awakening electronic device
CN109636712B (en) Image style migration and data storage method and device and electronic equipment
CN108876732A (en) Face U.S. face method and device
US20230008199A1 (en) Remote Assistance Method and System, and Electronic Device
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112396679B (en) Virtual object display method and device, electronic equipment and medium
US20180227506A1 (en) Method for providing image, electronic device, and storage medium
CN113965694B (en) Video recording method, electronic device and computer readable storage medium
CN112527174B (en) Information processing method and electronic equipment
CN112527222A (en) Information processing method and electronic equipment
CN113099146B (en) Video generation method and device and related equipment
CN111078170B (en) Display control method, display control device, and computer-readable storage medium
CN107657590A (en) Image processing method and device
CN105897862A (en) Method and apparatus for controlling intelligent device
WO2020238454A1 (en) Photographing method and terminal
WO2022156703A1 (en) Image display method and apparatus, and electronic device
WO2024067468A1 (en) Interaction control method and apparatus based on image recognition, and device
CN112019931B (en) Man-machine interaction method and device of smart television, smart television and storage medium
CN111225151B (en) Intelligent terminal, shooting control method and computer readable storage medium
CN112449098B (en) Shooting method, device, terminal and storage medium
CN107527334A (en) Human face light moving method and device
CN117409119A (en) Image display method and device based on virtual image and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant