CN111291219A - Method for changing interface background color and display equipment


Info

Publication number
CN111291219A
CN111291219A
Authority
CN
China
Prior art keywords
video
color
interface
recording
thumbnail file
Prior art date
Legal status
Pending
Application number
CN202010069609.XA
Other languages
Chinese (zh)
Inventor
于文钦
陈俊宁
丁佳一
宁静
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202010069609.XA
Publication of CN111291219A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G06F 16/738 Presentation of query results
    • G06F 16/739 Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F 16/7847 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F 16/785 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using colour or luminescence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Abstract

The application discloses a method for changing an interface background color and a display device. When a user triggers a designated video in a selection interface, a selection instruction is generated, and a video thumbnail file of the designated video is acquired according to the selection instruction; a palette class is called to extract a color value of each color in the video thumbnail file, the color proportions of the colors are compared, and the main color of the video thumbnail file is determined; the background color of the selection interface is then changed based on the main color and displayed. Therefore, when the user selects a designated video in the selection interface, the method provided by the embodiments of the application changes the background color of the selection interface to the main color of the video thumbnail file corresponding to the selected video, and the background color changes as the user clicks different designated videos, so that the background color of the selection interface is diversified and a richer visual effect is presented to the user.

Description

Method for changing interface background color and display equipment
Technical Field
The present application relates to the field of application level processing technologies, and in particular, to a method for changing an interface background color and a display device.
Background
With the continuous development of communication technology, terminal devices such as computers, smart phones and display devices have become increasingly popular. As users' demands on the application experience grow, different types of application programs, such as "magic show", may be installed in the display device, so that in addition to its television watching function the display device can also record videos.
To record videos with a display device, an interface for selecting videos is generally displayed, and the user selects a specific video based on this interface before recording. However, the background color of the interface displayed on current display devices is usually fixed; for example, once set to the system default color it does not change. This results in a single, monotonous interface color that cannot ensure the flexibility and richness of the interface and provides a poor visual effect.
Disclosure of Invention
The application provides a method for changing an interface background color and a display device, which are used to solve the problems of a single display color and a poor visual effect of existing interfaces.
In a first aspect, the present application provides a method for changing a background color of an interface, comprising the steps of:
responding to a selection instruction generated when a user triggers a designated video in a selection interface, and acquiring a video thumbnail file of the designated video;
calling a palette class, and extracting a color value of each color in the video thumbnail file;
comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file;
and changing the background color of the selection interface based on the main color and displaying the changed background color.
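For illustration, the overall flow could be sketched roughly as follows on an Android-style display device, assuming the "palette class" mentioned above refers to the AndroidX Palette library and that the selection interface is an ordinary view; the class and member names (SelectionBackgroundUpdater, onVideoSelected, thumbnailPath) are hypothetical and not taken from this application.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.view.View;
import androidx.palette.graphics.Palette;

public class SelectionBackgroundUpdater {

    // Called when the user triggers (clicks) a designated video in the selection interface.
    public void onVideoSelected(View selectionInterfaceRoot, String thumbnailPath) {
        // 1. Obtain the video thumbnail file of the designated video.
        Bitmap thumbnail = BitmapFactory.decodeFile(thumbnailPath);
        if (thumbnail == null) {
            return; // thumbnail not yet downloaded or unreadable
        }
        // 2-3. Call the Palette class, extract the colors and determine the dominant one.
        Palette.from(thumbnail).generate(palette -> {
            if (palette == null) {
                return;
            }
            Palette.Swatch dominant = palette.getDominantSwatch();
            if (dominant != null) {
                // 4. Change and display the background color of the selection interface.
                selectionInterfaceRoot.setBackgroundColor(dominant.getRgb());
            }
        });
    }
}
```

In this sketch, generate(listener) runs the color quantization asynchronously; in a real implementation the thumbnail decoding would likewise be moved off the UI thread.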
Further, the calling a palette class to extract a color value of each color in the video thumbnail file includes:
calling a palette class and extracting the pixel value of the video thumbnail file;
clustering the pixel values with the same color characteristics to obtain different types of colors;
and calculating a color value for each type of color, the color value comprising the number of pixel values of that color.
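As a hedged illustration of this step, the AndroidX Palette library (if that is indeed the palette class meant here) already performs such clustering: it quantizes pixels with similar color characteristics into swatches, each exposing an RGB value together with the number of pixels it represents. The helper class below is illustrative only.

```java
import android.graphics.Bitmap;
import androidx.palette.graphics.Palette;
import java.util.List;

// Extract per-color information from a video thumbnail: the Palette class
// quantizes (clusters) pixels with similar color characteristics into swatches,
// each carrying an RGB value and the number of pixels it represents.
public final class ThumbnailColors {

    public static List<Palette.Swatch> extractSwatches(Bitmap thumbnail) {
        Palette palette = Palette.from(thumbnail)
                .maximumColorCount(16)   // number of color clusters to keep
                .generate();             // synchronous generation (call off the UI thread)
        return palette.getSwatches();    // each swatch: getRgb() + getPopulation()
    }
}
```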
Further, the determining the dominant color of the video thumbnail file by comparing the color ratios of each color based on the color value of each color in the video thumbnail file includes:
calculating the ratio of each color according to the color value of each color in the video thumbnail file;
comparing the ratio of each color to determine the color corresponding to the largest ratio;
and taking the color corresponding to the maximum ratio value as the main color of the video thumbnail file.
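Continuing the illustrative Palette-based sketch (class name hypothetical), the ratio comparison could look like this:

```java
import androidx.palette.graphics.Palette;
import java.util.List;

// Dominant color by proportion: compare each color's share of the total pixel
// count and keep the color with the largest ratio.
public final class DominantByRatio {

    public static int dominantColor(List<Palette.Swatch> swatches, int fallbackColor) {
        long total = 0;
        for (Palette.Swatch s : swatches) {
            total += s.getPopulation();
        }
        if (total == 0) {
            return fallbackColor; // no color information available
        }
        Palette.Swatch best = null;
        double bestRatio = -1.0;
        for (Palette.Swatch s : swatches) {
            double ratio = (double) s.getPopulation() / total;
            if (ratio > bestRatio) {
                bestRatio = ratio;
                best = s;
            }
        }
        return best.getRgb(); // color corresponding to the maximum ratio
    }
}
```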
Further, the determining the dominant color of the video thumbnail file by comparing the color ratios of each color based on the color value of each color in the video thumbnail file includes:
extracting a red color value, a green color value and a blue color value of each color value based on the color value of each color in the video thumbnail file;
calculating a variance between the red, green and blue color values for each color;
and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
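The variance criterion might be sketched as follows, again over the illustrative swatch list; taking the variance among the red, green and blue components tends to favor vivid colors over gray-ish ones.

```java
import android.graphics.Color;
import androidx.palette.graphics.Palette;
import java.util.List;

// Dominant color by variance: for each color, compute the variance among its
// red, green and blue components and keep the color with the largest variance.
public final class DominantByVariance {

    public static int dominantColor(List<Palette.Swatch> swatches, int fallbackColor) {
        int best = fallbackColor;
        double bestVariance = -1.0;
        for (Palette.Swatch s : swatches) {
            int rgb = s.getRgb();
            double r = Color.red(rgb);
            double g = Color.green(rgb);
            double b = Color.blue(rgb);
            double mean = (r + g + b) / 3.0;
            double variance = ((r - mean) * (r - mean)
                    + (g - mean) * (g - mean)
                    + (b - mean) * (b - mean)) / 3.0;
            if (variance > bestVariance) {
                bestVariance = variance;
                best = rgb;
            }
        }
        return best;
    }
}
```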
Further, the changing and displaying the background color of the selection interface based on the main color comprises:
acquiring a background picture of the selection interface;
and setting the color of the background picture to the main color and displaying it.
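A minimal sketch of this step, assuming the selection interface is backed by an Android view whose background picture is a Drawable (helper name hypothetical):

```java
import android.graphics.drawable.Drawable;
import android.view.View;
import androidx.core.graphics.drawable.DrawableCompat;

// Apply the main color to the selection interface: obtain the background picture
// (drawable) of the interface and set its color to the main color before display.
public final class BackgroundPainter {

    public static void applyMainColor(View selectionInterfaceRoot, int mainColor) {
        Drawable background = selectionInterfaceRoot.getBackground();
        if (background != null) {
            Drawable tinted = DrawableCompat.wrap(background.mutate());
            DrawableCompat.setTint(tinted, mainColor);      // recolor the background picture
            selectionInterfaceRoot.setBackground(tinted);
        } else {
            selectionInterfaceRoot.setBackgroundColor(mainColor); // no picture: set a plain color
        }
    }
}
```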
In a second aspect, the present application provides a display device comprising:
a display configured to display a selection interface;
a processor in communication with the display, the processor configured to: responding to a selection instruction generated when a user triggers a designated video in a selection interface, and acquiring a video thumbnail file of the designated video;
calling a palette class, and extracting a color value of each color in the video thumbnail file;
comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file;
and changing the background color of the selection interface based on the main color and displaying the changed background color.
Further, the processor is further configured to:
calling a palette class and extracting the pixel value of the video thumbnail file;
clustering the pixel values with the same color characteristics to obtain different types of colors;
and calculating a color value for each type of color, the color value comprising the number of pixel values of that color.
Further, the processor is further configured to:
calculating the ratio of each color according to the color value of each color in the video thumbnail file;
comparing the ratio of each color to determine the color corresponding to the largest ratio;
and taking the color corresponding to the maximum ratio value as the main color of the video thumbnail file.
Further, the processor is further configured to:
extracting a red color value, a green color value and a blue color value of each color value based on the color value of each color in the video thumbnail file;
calculating a variance between the red, green and blue color values for each color;
and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
Further, the processor is further configured to:
acquiring a background picture of the selection interface;
and setting the color of the background picture to the main color and displaying it.
In a third aspect, the present application further provides a computer storage medium, which may store a program that, when executed, may implement some or all of the steps in the embodiments of the method for changing the color of the interface background provided in the present application.
According to the above technical solutions, in the method and display device for changing an interface background color provided by the embodiments of the application, a selection instruction is generated when the user triggers a designated video in the selection interface, and a video thumbnail file of the designated video is acquired according to the selection instruction; a palette class is called to extract a color value of each color in the video thumbnail file, the color proportions of the colors are compared, and the main color of the video thumbnail file is determined; the background color of the selection interface is then changed based on the main color and displayed. Therefore, when the user selects a designated video in the selection interface, the method changes the background color of the selection interface to the main color of the video thumbnail file corresponding to the selected video, and the background color changes as the user clicks different designated videos, so that the background color of the selection interface is diversified and a richer visual effect is presented to the user.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that other drawings can be derived from these drawings by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment;
fig. 2 is a block diagram exemplarily showing a hardware configuration of a control apparatus according to the embodiment;
fig. 3 is a block diagram exemplarily showing a hardware configuration of a hardware system in the display device according to the embodiment;
fig. 4 is a block diagram illustrating a hardware architecture of the display device according to fig. 3;
fig. 5 is a diagram exemplarily showing a functional configuration of a display device according to the embodiment;
fig. 6a schematically shows a configuration of a software system in a display device according to an embodiment;
fig. 6b schematically shows a configuration of an application in a display device according to an embodiment;
FIG. 7 is a diagram illustrating a user interface in a display device according to an embodiment;
FIG. 8 is a diagram illustrating a preview interface in accordance with an embodiment;
fig. 9 is a flow diagram illustrating a method of launching a video recording application according to an embodiment;
fig. 10 illustrates a flow diagram of a method of initiating a video recording interface according to an embodiment;
FIG. 11 is a diagram that illustrates a prompt interface, according to an embodiment;
FIG. 12 is a flow diagram illustrating a method of sending a video online preview request according to an embodiment;
FIG. 13 is a diagram illustrating a selection interface according to an embodiment;
fig. 14 is a diagram illustrating an original thumbnail according to an embodiment;
fig. 15 is a flow chart illustrating a video recording method according to an embodiment;
FIG. 16 is a schematic diagram illustrating a distribution of presentation selection interfaces according to an embodiment;
FIG. 17 is a flow chart illustrating a method of changing the color of the interface background according to an embodiment;
FIG. 18 is a diagram illustrating the generation of a download interface according to the present embodiment;
fig. 19 is a schematic diagram illustrating generation of a ready-to-record interface according to the present embodiment;
fig. 20 is a flowchart illustrating a method of generating a ready-to-record interface according to the present embodiment;
fig. 21 is a schematic diagram illustrating generation of a recording interface according to the present embodiment;
fig. 22 is a flowchart illustrating a method of generating a recording interface according to the present embodiment;
FIG. 23 is a flow chart illustrating a method of generating a preview in accordance with the present embodiment;
FIG. 24 is a flow chart illustrating a method for dynamic display of lyrics according to the present embodiment;
fig. 25 is a flowchart illustrating a method of video composition according to the present embodiment;
fig. 26 is a flowchart illustrating a method of composing a specified video file according to the present embodiment;
fig. 27 is a diagram exemplarily showing a data flow for synthesizing a specified video file according to the present embodiment;
fig. 28 is a diagram exemplarily showing a recording completion interface according to the present embodiment;
fig. 29 is another flowchart illustrating a video recording method according to the present embodiment;
FIG. 30 is a flow chart illustrating a method of receiving a video online preview request in accordance with the present embodiment;
fig. 31 is a still another flowchart illustrating a video recording method according to the present embodiment;
fig. 32 is a block diagram illustrating an interaction structure of a display device and a server according to the present embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
For the convenience of users, various external device interfaces are usually provided on the display device so that different peripheral devices or cables can be connected to implement corresponding functions. When a high-definition camera is connected to an interface of the display device, if the hardware system of the display device has no hardware interface capable of receiving the source data of the high-pixel camera, the data captured by the camera cannot be presented on the display screen of the display device.
Furthermore, limited by the hardware structure, the hardware system of a conventional display device supports only one channel of hard decoding resources and usually supports video decoding with a resolution of at most 4K. Therefore, when a user wants to carry on a video chat while watching network television, the hard decoding resource (usually the GPU in the hardware system) has to be used to decode the network video so as not to reduce the definition of the network video picture; in this case, the video chat picture can only be processed by soft decoding on a general-purpose processor (e.g., a CPU) in the hardware system.
Processing the video chat picture with soft decoding greatly increases the data processing burden of the CPU, and when the CPU's data processing burden is too heavy, picture stutter or lag may occur. Further, limited by the data processing capability of the CPU, multi-channel video calls usually cannot be implemented when the CPU soft-decodes the video chat picture, and when a user wants to video chat with multiple other users in the same chat scene, access congestion occurs.
In view of the above aspects and to overcome the above drawbacks, the present application discloses a dual hardware system architecture to implement multiple channels of video chat data (including at least one channel of local video).
The concept to which the present application relates will be first explained below with reference to the drawings. It should be noted that the following descriptions of the concepts are only for the purpose of facilitating understanding of the contents of the present application, and do not represent limitations on the scope of the present application.
The term "module," as used in various embodiments of the present application, may refer to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "remote control" as used in the embodiments of the present application refers to a component of an electronic device (such as the display device disclosed in the present application) that is capable of wirelessly controlling the electronic device, typically over a short distance. The component may typically be connected to the electronic device using infrared and/or Radio Frequency (RF) signals and/or bluetooth, and may also include functional modules such as WiFi, wireless USB, bluetooth, motion sensors, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in the common remote control device with the user interface in the touch screen.
The term "gesture" as used in the embodiments of the present application refers to a user behavior used to express an intended idea, action, purpose, or result through a change in hand shape or an action such as hand movement.
The term "hardware system" used in the embodiments of the present application may refer to a physical component having computing, controlling, storing, inputting and outputting functions, which is formed by a mechanical, optical, electrical and magnetic device such as an Integrated Circuit (IC), a Printed Circuit Board (PCB) and the like. In various embodiments of the present application, a hardware system may also be referred to as a motherboard (or chip).
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display apparatus 200 through the control device 100.
The control device 100 may be a remote controller 100A, which can communicate with the display device 200 through an infrared protocol communication, a bluetooth protocol communication, a ZigBee (ZigBee) protocol communication, or other short-range communication, and is used to control the display device 200 in a wireless or other wired manner. The user may input a user instruction through a key on a remote controller, voice input, control panel input, etc., to control the display apparatus 200. Such as: the user can input a corresponding control command through a volume up/down key, a channel control key, up/down/left/right moving keys, a voice input key, a menu key, a power on/off key, etc. on the remote controller, to implement the function of controlling the display device 200.
The control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, a notebook computer, etc., which may communicate with the display device 200 through a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), or other networks, and implement control of the display device 200 through an application program corresponding to the display device 200.
For example, the mobile terminal 100B and the display device 200 may each have a software application installed thereon, so that connection communication between the two can be realized through a network communication protocol, and the purpose of one-to-one control operation and data communication can be further realized. Such as: a control instruction protocol can be established between the mobile terminal 100B and the display device 200, a remote control keyboard is synchronized to the mobile terminal 100B, and the function of controlling the display device 200 is realized by controlling a user interface on the mobile terminal 100B; the audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200, so as to implement a synchronous display function.
As shown in fig. 1, the display apparatus 200 may also perform data communication with the server 300 through various communication means. In various embodiments of the present application, the display device 200 may be allowed to be communicatively coupled to the server 300 via a local area network, a wireless local area network, or other network. The server 300 may provide various contents and interactions to the display apparatus 200.
Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, and Electronic Program Guide (EPG) interactions. The servers 300 may be a group or groups, and may be one or more types of servers. Other web service contents such as a video on demand and an advertisement service are provided through the server 300.
The display device 200 may be, in one aspect, a liquid crystal display, an OLED (Organic Light Emitting Diode) display, or a projection display device; in another aspect, it may be a display system consisting of a smart television, or of a display and a set-top box. The specific display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be changed in performance and configuration as needed.
The display apparatus 200 may additionally provide an intelligent network TV function that provides a computer support function in addition to the broadcast receiving TV function. Examples include a web TV, a smart TV, an Internet Protocol TV (IPTV), and the like. In some embodiments, the display device may not have a broadcast receiving television function.
As shown in fig. 1, a camera may be connected or disposed on the display device, and is used to present a picture taken by the camera on a display interface of the display device or other display devices, so as to implement interactive chat between users. Specifically, the picture shot by the camera can be displayed on the display device in a full screen mode, a half screen mode or any optional area.
As an optional connection mode, the camera is connected to the display rear shell through a connecting plate and fixedly installed in the middle of the upper side of the display rear shell; as an installable mode, it can be fixedly installed at any position of the display rear shell, as long as the image acquisition area is not blocked by the rear shell, for example so that the image acquisition area faces the same direction as the display device.
As another alternative connection mode, the camera is connected to the display rear shell through a connection board or another conceivable connector that allows the camera to be raised and lowered, the connector being provided with a lifting motor. When the user or an application program wants to use the camera, it is raised above the display; when the camera is not needed, it can be retracted into the rear shell to protect it from damage.
As an embodiment, the camera adopted in the present application may have 16 million pixels, so as to achieve the purpose of ultra-high-definition display. In actual use, cameras with more or fewer than 16 million pixels may also be used.
After the camera is installed on the display device, the contents displayed by different application scenes of the display device can be fused in various different modes, so that the function which cannot be realized by the traditional display device is achieved.
Illustratively, a user may conduct a video chat with at least one other user while watching a video program. The presentation of the video program may be as a background frame over which a window for video chat is displayed. The function is called 'chat while watching'.
Optionally, in a scene of "chat while watching", at least one video chat is performed across terminals while watching a live video or a network video.
In another example, a user can conduct a video chat with at least one other user while entering the educational application for learning. For example, a student may interact remotely with a teacher while learning content in an educational application. Vividly, this function can be called "chatting while learning".
In another example, a user conducts a video chat with a player entering a card game while playing the game. For example, a player may enable remote interaction with other players when entering a gaming application to participate in a game. Figuratively, this function may be referred to as "watch while playing".
Optionally, the game scene is fused with the video picture: the portrait in the video picture is matted out and displayed in the game picture, improving the user experience.
Optionally, in motion-sensing games (such as ball games, boxing, running and dancing), the camera is used to capture human posture and motion, perform limb detection and tracking and human skeleton key point detection, and the results are then fused with the animation in the game to realize games in scenes such as sports and dancing.
In another example, a user may interact with at least one other user in a karaoke application in video and voice. Vividly, this function can be called "sing while watching". Preferably, when at least one user enters the application in a chat scenario, a plurality of users can jointly complete recording of a song.
In another example, a user may turn on the camera locally to take pictures and videos; figuratively, this function may be referred to as "looking in the mirror".
In other examples, more or less functionality may be added. The function of the display device is not particularly limited in the present application.
Fig. 2 is a block diagram schematically showing a hardware configuration of the control apparatus 100 according to the embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control apparatus 100 is configured to control the display device 200, and to receive an input operation instruction from a user, and to convert the operation instruction into an instruction recognizable and responsive by the display device 200, and to mediate interaction between the user and the display device 200. Such as: the user operates the channel up/down key on the control device 100, and the display device 200 responds to the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display device 200 according to user demands.
In some embodiments, as shown in fig. 1, the mobile terminal 100B or other intelligent electronic device may perform a function similar to that of the control apparatus 100 after installing an application for manipulating the display device 200. For example, by installing applications, the user may use the various function keys or virtual buttons of a graphical user interface available on the mobile terminal 100B or other intelligent electronic device to implement the functions of the physical keys of the control apparatus 100.
The controller 110 includes a processor 112, a RAM 113, a ROM 114, a communication interface, and a communication bus. The controller 110 is used to control the operation of the control device 100, the communication and coordination among its internal components, and external and internal data processing.
The communicator 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communicator 130 may include at least one of a WIFI module 131, a bluetooth module 132, an NFC module 133, and the like.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like. Such as: the user can realize a user instruction input function through actions such as voice, touch, gesture, pressing, and the like, and the input interface converts the received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, it may be an infrared interface or a radio frequency interface. For example, when the infrared signal interface is used, the user input instruction needs to be converted into an infrared control signal according to an infrared control protocol, and the infrared control signal is sent to the display device 200 through the infrared sending module. For another example, when the radio frequency signal interface is used, the user input command needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
In some embodiments, the control device 100 includes at least one of a communicator 130 and an output interface. The communicator 130 is configured in the control device 100, such as: the modules of WIFI, bluetooth, NFC, etc. may send the user input command to the display device 200 through the WIFI protocol, or the bluetooth protocol, or the NFC protocol code.
And a memory 190 for storing various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 190 may store various control signal commands input by a user.
And a power supply 180 for providing operational power support to the components of the control device 100 under the control of the controller 110; it may include a battery and associated control circuitry.
A hardware configuration block diagram of a hardware system in the display device 200 according to the embodiment is exemplarily shown in fig. 3.
When a dual hardware system architecture is adopted, the mechanism relationship of the hardware system can be shown in fig. 3. For convenience of description, one hardware system in the dual hardware system architecture will be referred to as a first hardware system or a system, a-chip, and the other hardware system will be referred to as a second hardware system or N-system, N-chip. The chip A comprises a controller of the chip A and various interfaces, and the chip N comprises a controller of the chip N and various interfaces. The chip a and the chip N may each have a relatively independent operating system, and the operating system of the chip a and the operating system of the chip N may communicate with each other through a communication protocol, which is as follows: the frame layer of the operating system of the a-chip and the frame layer of the operating system of the N-chip can communicate to transmit commands and data, so that two independent subsystems, which are associated with each other, exist in the display device 200.
As shown in fig. 3, the a chip and the N chip may be connected, communicated and powered through a plurality of different types of interfaces. The interface type of the interface between the a chip and the N chip may include a General-purpose input/output interface (GPIO), a USB interface, an HDMI interface, a UART interface, and the like. One or more of these interfaces may be used for communication or power transfer between the a-chip and the N-chip. For example, as shown in fig. 3, in the dual hardware system architecture, the N chip may be powered by an external power source (power), and the a chip may not be powered by the external power source but by the N chip.
In addition to the interface for connecting with the N chip, the a chip may further include an interface for connecting other devices or components, such as an MIPI interface for connecting a Camera (Camera) shown in fig. 3, a bluetooth interface, and the like.
Similarly, in addition to the interface for connecting with the N chip, the N chip may further include a VBY (V-by-One) interface for connecting with the display screen TCON (timing controller), an I2S interface for connecting with a power Amplifier (AMP) and a Speaker (Speaker), and an IR/Key interface, a USB interface, a WiFi interface, a bluetooth interface, an HDMI interface, a Tuner interface, and the like.
The dual hardware system architecture of the present application is further described below with reference to fig. 4. It should be noted that fig. 4 is only an exemplary illustration of the dual hardware system architecture of the present application, and does not represent a limitation of the present application. In actual practice, both hardware systems may contain more or less hardware or interfaces as desired.
A block diagram of the hardware architecture of the display device 200 according to fig. 3 is exemplarily shown in fig. 4. As shown in fig. 4, the hardware system of the display device 200 may include an a chip and an N chip, and a module connected to the a chip or the N chip through various interfaces.
The N chip may include a tuner demodulator 220, a communicator 230, an external device interface 250, a processor 210, a memory 290, a user input interface, a video processor 260-1, a display 280, an audio output interface 270, and a power supply. In other embodiments, the N chip may also include more or fewer modules.
The tuning demodulator 220 is configured to perform modulation and demodulation processing such as amplification, mixing, resonance and the like on a broadcast television signal received in a wired or wireless manner, so as to demodulate an audio/video signal carried in a frequency of a television channel selected by a user and additional information (e.g., an EPG data signal) from a plurality of wireless or wired broadcast television signals. Depending on the broadcast system of the television signal, the signal path of the tuner 220 may be various, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; according to different modulation types, the adjustment mode of the signal can be a digital modulation mode or an analog modulation mode; and depending on the type of television signal being received, tuner demodulator 220 may demodulate analog and/or digital signals.
The tuner demodulator 220 is also operative to respond to the user-selected television channel frequency and the television signals carried thereby, in accordance with the user selection and as controlled by the processor 210.
In other exemplary embodiments, the tuner/demodulator 220 may be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, and the television audio/video signals are input into the display device 200 through the external device interface 250.
The communicator 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communicator 230 may include a WIFI module 231, a bluetooth communication protocol module 232, a wired ethernet communication protocol module 233, and other network communication protocol modules such as an infrared communication protocol module or a near field communication protocol module.
The display apparatus 200 may establish a connection of a control signal and a data signal with an external control apparatus or a content providing apparatus through the communicator 230. For example, the communicator may receive a control signal of the remote controller 100A according to the control of the controller.
The external device interface 250 is a component for providing data transmission between the N-chip processor 210 and the a-chip and other external devices. The external device interface may be connected with an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner, and may receive data such as a video signal (e.g., moving image), an audio signal (e.g., music), additional information (e.g., EPG), etc. of the external apparatus.
The external device interface 250 may include: a High Definition Multimedia Interface (HDMI) terminal 251, a Composite Video Blanking Sync (CVBS) terminal, such as any one or more of an AV interface 252, an analog or digital component terminal 353, a Universal Serial Bus (USB) terminal 254, a Red Green Blue (RGB) terminal (not shown in the figure), and the like. The number and type of external device interfaces are not limited by this application.
The processor 210 controls the operation of the display device 200 and responds to user operations by running various software control programs (e.g., an operating system and/or various application programs) stored on the memory 290.
As shown in FIG. 4, the processor 210 includes a random access memory RAM 214, a read-only memory ROM 213, a graphics processor 216, a communication interface 218, and a communication bus. The RAM 214, the ROM 213, the graphics processor 216, and the communication interface 218 are connected via the bus.
Processor 210 for executing operating system and application program instructions stored in memory 290. And executing various application programs, data and contents according to various interactive instructions received from the outside so as to finally display and play various audio and video contents.
A ROM213 for storing instructions for various system boots. For example, when the power-on signal is received, the display device 200 starts to power up, and the processor 210 executes the system boot instruction in the ROM, and copies the operating system stored in the memory 290 to the RAM214 to start running the boot operating system. After the start of the operating system is completed, the processor 210 copies the various applications in the memory 290 to the RAM214, and then starts running and starting the various applications.
A graphics processor 216 for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The display device comprises an arithmetic unit which carries out operation by receiving various interactive instructions input by a user and displays various objects according to display attributes. And a renderer for generating various objects based on the operator and displaying the rendered result on the display 280.
In some example embodiments, the processor 210 may include a plurality of processors. The plurality of processors may include a main processor and one or more sub-processors. The main processor is used to perform some operations of the display apparatus 200 in a pre-power-up mode and/or operations of displaying a screen in the normal mode. The one or more sub-processors are used to perform operations in a standby mode or the like.
The communication interfaces may include a first interface 218-1 through an nth interface 218-n. These interfaces may be network interfaces that are connected to external devices via a network.
The processor 210 may control the overall operation of the display device 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 280, the processor 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to an icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 290 includes a memory for storing various software modules for driving and controlling the display apparatus 200. Such as: various software modules stored in memory 290, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like.
The basic module is a bottom layer software module for signal communication between hardware in the display device 200 and sending processing and control signals to an upper layer module. The detection module is a management module used for collecting various information from various sensors or user input interfaces, and performing digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and may be used to play information such as multimedia image content and UI interface. The communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing data communication between the browsing servers. The service module is a module for providing various services and various application programs.
Meanwhile, the memory 290 is also used to store visual effect maps and the like for receiving external data and user data, images of respective items in various user interfaces, and a focus object.
A user input interface for transmitting an input signal of the user to the processor 210 or transmitting a signal output from the processor to the user. For example, the control device (e.g., a mobile terminal or a remote controller) may send an input signal entered by the user, such as a power switch signal, a channel selection signal or a volume adjustment signal, to the user input interface, which then transfers the input signal to the processor; alternatively, the control device may receive an output signal such as audio, video or data output from the user input interface via the processor, and display the received output signal or output it in audio or vibration form.
In some embodiments, a user may enter a user command on a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 260-1 is configured to receive a video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 280.
Illustratively, the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used to demultiplex the input audio/video data stream; for example, if MPEG-2 is input, the demultiplexing module demultiplexes it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 24 Hz, 25 Hz, 30 Hz or 60 Hz video to a 60 Hz, 120 Hz or 240 Hz frame rate, where the input frame rate may be related to the source video stream and the output frame rate may be related to the refresh rate of the display. A common implementation uses frame interpolation.
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format of a display, such as converting the format of the signal output by the frame rate conversion module to output an RGB data signal.
And a display 280 for receiving the image signal input from the video processor 260-1 and displaying video content, images and the menu manipulation interface. The display 280 includes a display component for presenting pictures and a driving component for driving image display. The displayed video content may come from the video in the broadcast signal received by the tuner demodulator 220, or from video content input through the communicator or the external device interface. The display 280 also displays a user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
And, a driving component for driving the display according to the type of the display 280. Alternatively, in case the display 280 is a projection display, it may also comprise a projection device and a projection screen.
The audio processor 260-2 is configured to receive an audio signal, decompress and decode the audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification and other audio data processing to obtain an audio signal that can be played in the speaker 272.
The audio output interface 270, for outputting audio under the control of the processor 210, may include a speaker 272, or an external sound output terminal 274 for output to a sound generating device of an external device, such as an external sound terminal or an earphone output terminal.
In other exemplary embodiments, video processor 260-1 may comprise one or more chip components. The audio processor 260-2 may also include one or more chips.
And, in other exemplary embodiments, video processor 260-1, may be a separate chip or may be integrated with processor 210 in one or more chips.
And a power supply for supplying power supply support to the display device 200 from the power input from the external power source under the control of the processor 210. The power supply may include a built-in power supply circuit installed inside the display apparatus 200, or may be a power supply installed outside the display apparatus 200, such as a power supply interface for providing an external power supply in the display apparatus 200.
Similar to the N chip, as shown in fig. 4, the A chip may include a processor 310, a communicator 330, a detector 340, and a memory 390. In some embodiments, a user input interface, a video processor, an audio processor, a display and an audio output interface may also be included. In some embodiments, there may also be a power supply that independently powers the A chip.
The communicator 330 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communicator 330 may include a WIFI module 331, a bluetooth communication protocol module 332, a wired ethernet communication protocol module 333, and other network communication protocol modules such as an infrared communication protocol module or a near field communication protocol module.
The communicator 330 of the A chip and the communicator 230 of the N chip also interact with each other. For example, the WiFi module 231 of the N chip is used to connect to an external network and establish network communication with an external server, while the WiFi module 331 of the A chip is used to connect to the WiFi module 231 of the N chip and does not make a direct connection with an external network. Therefore, for the user, a display device as in the above embodiment presents only one WiFi account to the outside.
The detector 340 is a component of the display device a chip for collecting signals of an external environment or interacting with the outside. The detector 340 may include a light receiver 342, a sensor for collecting the intensity of ambient light, which may be used to adapt to display parameter changes, etc.; the system may further include an image collector 341, such as a camera, a video camera, etc., which may be configured to collect external environment scenes, collect attributes of the user or interact gestures with the user, adaptively change display parameters, and identify user gestures, so as to implement a function of interaction with the user.
An external device interface 350, which provides a component for data transmission between the processor 310 and the N-chip or other external devices. The external device interface may be connected with an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner.
A video processor 360-1 for processing the associated video signal.
An audio processor 360-2 for processing the associated audio signal.
The processor 310 controls the operation of the display device 200 and responds to the user's operations by running various software control programs stored on the memory 390 (e.g., using installed third party applications, etc.), and interacting with the N-chip.
As shown in FIG. 4, processor 310 includes read only memory ROM313, random access memory RAM314, graphics processor 316, processor 310, communication interface 318, and a communication bus. The ROM313 and the RAM314, the graphic processor 316, the processor 310, and the communication interface 318 are connected via a bus.
And a processor 310 for executing the operating system and application program instructions stored in the memory 390, communicating with the N-chip, transmitting and interacting signals, data, instructions, etc., and executing various application programs, data and contents according to various interaction instructions received from the outside, so as to finally display and play various audio and video contents.
A ROM313 for storing instructions for various system boots. The processor 310 executes system boot instructions in ROM and copies the operating system stored in memory 390 to RAM314 to begin running the boot operating system. After the booting of the operating system is completed, the processor 310 copies the various applications in the memory 390 to the RAM314, and then starts to run and boot the various applications.
The communication interface 318 is plural. These interfaces may be network interfaces connected to external devices via a network, or may be network interfaces connected to the N-chip via a network.
And an audio processor 360-2, configured to receive the audio signal, decompress and decode the audio signal according to a standard encoding and decoding protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing.
The processor 310 may control the overall operation of the display device 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 280, the processor 210 may perform an operation related to the object selected by the user command.
A graphics processor 316 for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The display device comprises an arithmetic unit which carries out operation by receiving various interactive instructions input by a user and displays various objects according to display attributes. And a renderer for generating various objects based on the operator and displaying the rendered result on the display 280.
Both the A-chip graphics processor 316 and the N-chip graphics processor 216 are capable of generating various graphics objects. The difference is that, if application 1 is installed on the A-chip and application 2 is installed on the N-chip, then when the user inputs a command in the interface of application 1 and within application 1, the graphics object is generated by the A-chip graphics processor 316; and when the user inputs a command in the interface of application 2 and within application 2, the graphics object is generated by the N-chip graphics processor 216.
Fig. 5 is a diagram exemplarily showing a functional configuration of a display device according to the embodiment.
As shown in fig. 5, the memory 390 of the a-chip and the memory 290 of the N-chip are used to store an operating system, an application program, contents, user data, and the like, respectively, and perform system operations for driving the display device 200 and various operations in response to a user under the control of the processor 310 of the a-chip and the processor 210 of the N-chip. The A-chip memory 390 and the N-chip memory 290 may include volatile and/or non-volatile memory.
The memory 290 is specifically configured to store an operating program for driving the processor 210 in the display device 200, and to store various applications built in the display device 200, various applications downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an Operating System (OS) kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing the drivers and related data of the video processor 260-1, the audio processor 260-2, the display 280, the communication interface 230, the tuner demodulator 220, the input/output interface, and the like.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
The memory 290, for example, includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, a first audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. Processor 210 performs functions such as: the system comprises a broadcast television signal receiving and demodulating function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction identification function, a communication control function, an optical signal receiving function, an electric power control function, a software control platform supporting various functions, a browser function and other various functions.
The memory 390 includes a memory storing various software modules for driving and controlling the display apparatus 200. Such as: various software modules stored in memory 390, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like. Since the functions of the memory 390 and the memory 290 are similar, reference may be made to the memory 290 for relevant points, and thus, detailed description thereof is omitted here.
Illustratively, the memory 390 includes an image control module 3904, a second audio control module 3906, an external instruction recognition module 3907, a communication control module 3908, a light receiving module 3909, an operating system 3911, other application programs 3912, a browser module, and the like. The processor 310 performs functions such as: an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal receiving function, an electric power control function, a software control platform supporting various functions, a browser function, and other various functions.
Differently, the external instruction recognition module 2907 of the N-chip and the external instruction recognition module 3907 of the a-chip can recognize different instructions.
Illustratively, since an image receiving device such as a camera is connected to the A-chip, the external instruction recognition module 3907 of the A-chip may include an image recognition module 3907-1. A graphics database is stored in the image recognition module 3907-1, and when the camera receives an external graphics instruction, the instruction is matched against the instructions in the graphics database to perform instruction control on the display device. Since the voice receiving device and the remote controller are connected to the N-chip, the external command recognition module 2907 of the N-chip may include a voice recognition module 2907-2. A voice database is stored in the voice recognition module 2907-2, and when the voice receiving device or a similar device receives an external voice command, the command is matched against the commands in the voice database to perform command control on the display device. Similarly, a control device 100 such as a remote controller is connected to the N-chip, and a key command recognition module performs command interaction with the control device 100.
A block diagram of the configuration of the software system in the display device 200 according to an embodiment is exemplarily shown in fig. 6 a.
For an N-chip, as shown in fig. 6a, the operating system 2911, which includes executing operating software for handling various basic system services and for performing hardware related tasks, serves as an intermediary between applications and hardware components for data processing.
In some embodiments, portions of the operating system kernel may contain a series of software to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. Examples include a display, a camera, Flash, WiFi, and audio drivers.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controlling process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application 2912. In some embodiments, it is implemented partly within the operating system 2911 and partly within the application 2912, and it is used for listening for the various user input events and implementing one or more sets of predefined operations in response to the recognition of various types of events or sub-events.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 is used to input various event definitions for various user input interfaces, identify various events or sub-events, and transmit them to the process for executing one or more sets of their corresponding handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200, or an input from an external control device (e.g., the control apparatus 100), such as: various sub-events input by voice, gesture sub-events input through gesture recognition, sub-events input by remote-control key commands of the control device, and the like. Illustratively, the one or more sub-events on the remote control include a variety of forms, including but not limited to one or a combination of pressing the up/down/left/right keys, pressing the OK key, holding a key, and the like, as well as non-physical key operations such as move, hold, and release.
The interface layout management module 2913, directly or indirectly receiving the input events or sub-events from the event transmission system 2914, monitors the input events or sub-events, and updates the layout of the user interface, including but not limited to the position of each control or sub-control in the interface, and the size, position, and level of the container, which are related to the layout of the interface.
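As a rough illustration of how the event monitoring, event identification, and layout-update modules might cooperate, the following is a minimal Java sketch; the class, enum, and handler names are hypothetical and not part of the patent's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch only: sub-event types and handler behavior are hypothetical.
public class EventTransmissionSystem {

    // Sub-events that the event identification module may recognize.
    public enum SubEvent { KEY_UP, KEY_DOWN, KEY_LEFT, KEY_RIGHT, KEY_OK, VOICE, GESTURE }

    // Handlers registered for each recognized sub-event.
    private final Map<SubEvent, Consumer<SubEvent>> handlers = new HashMap<>();

    // A module (e.g. interface layout management) registers a handler for a sub-event.
    public void register(SubEvent event, Consumer<SubEvent> handler) {
        handlers.put(event, handler);
    }

    // The event monitoring module calls this when a user input is detected;
    // the event identification module maps it to its handler and executes it.
    public void dispatch(SubEvent event) {
        Consumer<SubEvent> handler = handlers.get(event);
        if (handler != null) {
            handler.accept(event);
        }
    }

    public static void main(String[] args) {
        EventTransmissionSystem system = new EventTransmissionSystem();
        system.register(SubEvent.KEY_OK, e -> System.out.println("Update layout: item selected"));
        system.dispatch(SubEvent.KEY_OK);
    }
}
```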
Since the functions of the operating system 3911 of the a chip are similar to those of the operating system 2911 of the N chip, reference may be made to the operating system 2911 for relevant points, and details are not repeated here.
Fig. 6b schematically shows a configuration of an application in a display device according to an embodiment; as shown in fig. 6b, the application layer of the display device contains various applications that can be executed at the display device 200.
The N-chip application layer 2912 may include, but is not limited to, one or more applications such as: a video-on-demand application, an application center, a game application, and the like. The application layer 3912 of the a-chip may include, but is not limited to, one or more applications such as: live television applications, media center applications, and the like. It should be noted that what applications are respectively contained in the a chip and the N chip is determined according to an operating system and other designs, and the present invention does not need to make specific limitations and divisions on the applications contained in the a chip and the N chip.
The live television application program can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
A video-on-demand application may provide video from different storage sources. Unlike live television applications, video on demand provides a video display from some storage source. For example, the video on demand may come from a server side of the cloud storage, from a local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia content. For example, a media center may provide services other than live television or video on demand, and a user may access various images or audio through the media center application.
The application program center can provide and store various application programs. The application may be a game, an application, or some other application associated with a computer system or other device that may be run on a display device. The application center may obtain these applications from different sources, store them in local storage, and then be operable on the display device 200.
A schematic diagram of a user interface in a display device 200 according to an embodiment is illustrated in fig. 7. As shown in fig. 7, the user interface includes a plurality of view display areas, illustratively, a first view display area 201 and a play screen 202, wherein the play screen includes a layout of one or more different items. And a selector in the user interface indicating that the item is selected, the position of the selector being movable by user input to change the selection of a different item.
It should be noted that the multiple view display areas may present display screens of different hierarchies. For example, a first view display area may present video chat project content and a second view display area may present application layer project content (e.g., web page video, VOD presentations, application screens, etc.).
Optionally, the different view display areas are presented with different priorities, and the display priorities differ among view display areas with different priorities. For example, if the priority of the system layer is higher than that of the application layer, then when the user operates the selector and switches pictures in the application layer, the picture display of the system-layer view display area is not blocked; and when the size and position of the application-layer view display area change according to the user's selection, the size and position of the system-layer view display area are not affected.
The display frames of the same hierarchy can also be presented, at this time, the selector can switch between the first view display area and the second view display area, and when the size and the position of the first view display area are changed, the size and the position of the second view display area can be changed along with the change.
Since the a-chip and the N-chip may have independent operating systems installed therein, there are two independent but interrelated subsystems in the display device 200. For example, an Android (Android) and various APPs can be independently installed on the a chip (first chip) and the N chip (second chip), so that each chip can realize a certain function, for example, an application program for video recording can be installed in the display device, and the display device has a video recording function.
In order to provide a richer personalized experience for the user during video recording, the embodiment of the invention provides a video recording method, which comprises two implementation processes: a video generation process and a video synthesis process. In the video generation process, a sticker/filter effect based on a face recognition algorithm can be added on top of the original camera picture, and lyrics can be dynamically displayed in the picture, so that when a user records a video, in addition to the objects in the area covered by the camera, the video material corresponding to the selected video is synchronously added during recording, meeting the personalized requirements of the user; the user includes the photographer or the subject. In the video synthesis process, the picture shot by the camera in real time, the sticker/filter effect, and the dynamic lyrics from the video generation process are synchronously synthesized and displayed on a preview interface of the display device, and a video file is formed after recording is complete, so that the final video has an MV effect.
The video recording method provided by the embodiment of the invention is applied to display equipment, wherein the display equipment comprises an intelligent television and the like, and the display equipment is in communication connection with a server. In order to ensure the local storage space of the display device, video materials and contents required for recording videos are usually stored in a server, and when the display device performs video recording, the display device sends a request to the server to obtain the video materials and contents of corresponding videos, and then performs video recording.
In order to realize video recording, a camera, an MIC (microphone) and a player are arranged in display equipment, the camera is used for realizing picture shooting, the player is used for realizing video content playing, and the MIC is used for collecting external sound; meanwhile, a corresponding video recording application program, such as a magic mirror application, is installed in the display device. The video recording application program is installed in an A chip processor of the display equipment, the A chip processor sends a request to the server through the N chip processor, and the server issues corresponding video materials to the A chip processor according to the request so as to meet the video recording requirement of 'magic mirror application' in the display equipment.
FIG. 8 is a diagram illustrating a preview interface in accordance with an embodiment; a flow diagram of a method of launching a video recording application according to an embodiment is illustrated in fig. 9. Referring to fig. 8 and fig. 9, in the video recording method provided in the embodiment of the present invention, an interaction flow between the display device and the server includes:
and S011, receiving an application starting instruction generated when the video recording application program is triggered.
And S012, responding to the application starting instruction, starting the camera, and generating a preview interface, wherein the preview interface is used for displaying the video recording interface and the picture shot by the camera.
The user opens the video recording application program installed in the display device through a remote controller or a voice mode, and generates an application opening instruction. The display equipment responds to an application starting instruction triggered by a user, and starts the camera while opening the application program with the video recording function.
In some embodiments, the camera is hidden behind the electronic display screen. The display device, in response to a start instruction for a camera-related application (such as the magic mirror application) triggered by the user, opens the application program with the video recording function and simultaneously drives a motor to lift the camera; the camera is started after it is completely lifted.
And after the video recording application program is opened, synchronously generating a preview interface. And displaying the real-time picture shot by the camera in the preview interface, and displaying a related interface required by video recording. For example, a video recording interface, a prop triggering interface, a normal shooting interface, other setting interfaces, and the like.
Referring to fig. 8, the video recording interface may be the "magic show" shown in fig. 8, used for recording video with added video material and dynamic lyrics, the video material including filters, stickers, special effects, magic effects, and the like. The prop trigger interface may be the "magic prop" shown in fig. 8. The normal shooting interface may be the "shoot / long press to start recording" shown in fig. 8; this interface shoots an ordinary picture or video, with no video material added, i.e., the original picture. Other setting interfaces may include the "timed shooting" shown in fig. 8, for shooting a video of a predetermined length of time, and "set focal length", for digital zoom, adjusting the focal length of the camera, or adjusting the size of the video material.
When a user wants to record a video with a special effect or a dynamic lyric effect, the user clicks a video recording interface 'magic show' on a preview interface shown in fig. 8, starts a special effect video recording function, and enters a magic show video list interface. The magic show video list interface is a selection interface for a user to select to shoot videos, and the interface is displayed with a video list interface and a video preview interface.
A flow chart of a method of initiating a video recording interface according to an embodiment is illustrated in fig. 10. Referring to fig. 10, in particular, a method provided by the embodiment of the present invention starts a process of a video recording interface, including:
and S021, receiving a starting instruction generated when the video recording interface is triggered.
S022, responding to the starting instruction, and detecting whether the network state is connected.
S023, if the network states are communicated, generating a recommended video acquisition request.
And S024, sending the recommended video acquisition request to a server.
When a user triggers the video recording interface, a starting instruction is generated, namely, the special-effect video recording function is started. The display equipment responds to the starting instruction, and firstly detects whether the network is connected or not, namely detects whether the display equipment is in communication connection with the server or not, so that the display equipment can request corresponding video materials from the server when video recording is carried out.
If the network state of the display equipment is connected, the display equipment can perform a subsequent video recording process, a recommended video acquisition request is generated at the same time, and the display equipment sends the recommended video acquisition request to the server so as to acquire recommended video list information from the server.
A schematic diagram of a prompt interface according to an embodiment is illustrated in fig. 11. Referring to fig. 11, if the network status is not connected, a prompt interface is generated, and the prompt interface is used for displaying prompt information and selection buttons. When the network state of the display device is not connected, indicating that video recording cannot be carried out at the moment, a prompt pops up. Prompt information is displayed on the prompt interface, such as "The network connection is abnormal, please set up the network first." The prompt interface also displays selection buttons, such as "Cancel" and "Set network". When the user clicks "Cancel", the display returns to the preview interface shown in fig. 8; when the user clicks "Set network", a setting interface is opened to enter the network settings. After the network is connected, the display device sends the recommended video acquisition request to the server.
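A minimal sketch of the network-state check in steps S022 to S024, assuming a standard Android ConnectivityManager is available; the requestRecommendedVideoList() and showPrompt() calls in the comments are hypothetical placeholders for the behavior described above.

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Sketch only: the follow-up actions are indicated as hypothetical placeholder comments.
public class NetworkStateChecker {

    // Returns true if the display device currently has an active, connected network.
    public static boolean isNetworkConnected(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();
        return info != null && info.isConnected();
    }

    public static void onVideoRecordingInterfaceTriggered(Context context) {
        if (isNetworkConnected(context)) {
            // S023/S024: generate and send the recommended video acquisition request.
            // requestRecommendedVideoList();
        } else {
            // Show the prompt interface with "Cancel" and "Set network" buttons.
            // showPrompt("The network connection is abnormal, please set up the network first.");
        }
    }
}
```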
A flow diagram of a method of sending a video online preview request according to an embodiment is illustrated in fig. 12. According to the method provided by the embodiment of the invention, after the display equipment sends the recommended video acquisition request generated by triggering the video recording interface to the server, the interface information returned by the server is acquired so as to be displayed on the selection interface, and the user can select the designated video as the reference video during video recording. Specifically, referring to fig. 12, a process for a display device to send a video online preview request to a server to obtain interface information includes:
and S031, sending a recommended video acquisition request generated when the video recording interface is triggered to the server, wherein the recommended video acquisition request is used for requesting the server to issue recommended video list information. The recommended video list information includes at least one of a singer name, a video title, a video address, a lyric address, a video thumbnail address, and a video material corresponding to each video.
The user clicks a magic show of a video recording interface to generate a recommended video acquisition request, and the display equipment sends the recommended video acquisition request to the server so as to acquire recommended video list information which is required by the user for video recording. The recommended video list information includes information on a plurality of reference videos provided in the display apparatus, such as at least one of a singer name, a video title, a video address, a lyric address, a video thumbnail address, and a video material.
The reference video is used as a template video when a user records videos, and can pre-present video effects of files formed after the videos are recorded, such as lyric presentation, video material presentation, body motion presentation and the like. The user can select a favorite video in the selection interface as a reference video during video recording according to the pre-known video effect.
The singer name and the video title are used to distinguish different reference videos. For example, when the reference video is the "learn cat call" video, "learn cat call" is the video title of the reference video and "von ti mo" is the singer name of the reference video. The video address provides the display device with an address for downloading the reference video. The lyric address provides the display device with an address for downloading the lyrics corresponding to the reference video. The video thumbnail address provides the display device with an address for acquiring the video thumbnail file of the reference video. The video material provides the display device, when recording video, with the materials corresponding to the reference video, such as filters, stickers, special effects, and magic effects; the video materials can be distinguished by different video material names, and the video material names corresponding to different reference videos are different, that is, the video materials are different.
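For illustration, one entry of the recommended video list information described above could be represented on the display device side roughly as follows; the class and field names are assumptions, not the actual data format used by the server.

```java
// Sketch of one entry in the recommended video list; field names are illustrative.
public class RecommendedVideoItem {
    public String singerName;            // used to distinguish reference videos
    public String videoTitle;            // e.g. "learn cat call"
    public String videoAddress;          // address for downloading/playing the reference video
    public String lyricAddress;          // address for downloading the corresponding lyrics
    public String videoThumbnailAddress; // address of the video thumbnail file (a picture)
    public String videoMaterialName;     // identifies the filter/sticker/effect package
}
```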
S032, receiving the recommended video list information returned by the server according to the recommended video acquisition request.
And after receiving a recommended video acquisition request sent by the display equipment, the server issues recommended video list information of all reference videos included in the selection interface to the display equipment, and the display equipment displays the recommended video list information or acquires reference video contents during subsequent recording.
S033, generating a selection interface displaying the singer name and the video title.
A schematic diagram of a selection interface according to an embodiment is exemplarily shown in fig. 13. Referring to fig. 13, after receiving the recommended video list information, the display device displays the singer name and the video title from the information on the selection interface, and the singer name and video title corresponding to the same reference video are displayed together in the same list item, such as "learn cat call - von ti mo", "curry - milk coffee", and "take you to travel - school" in fig. 13.
According to the method provided by the embodiment of the invention, when the user selects the reference video in the selection interface, the thumbnail of the reference video can be displayed in the selection interface for viewing conveniently, and when the content of the reference video is viewed, the lyrics are displayed to display the complete effect of the video. Therefore, after receiving the recommended video list information sent by the server, the display device acquires corresponding lyrics and a video thumbnail file from the server according to the lyric address and the video thumbnail address, wherein the video thumbnail file is used for displaying the content of the current reference video, is in a picture form, and can be a screenshot of a certain frame of the video content.
S034, when the user selects the specified video based on the selection interface, sending a video online preview request carrying a lyric address and a video thumbnail address to the server, wherein the video online preview request is used for requesting the server to issue lyrics corresponding to the lyric address and video thumbnail files corresponding to the video thumbnail address.
The display equipment receives the recommended video list information sent by the server, the recommended video list information is displayed on a selection interface, a user selects a specified video from each reference video displayed in the selection interface, and at the moment, the display equipment sends a video online preview request to the server for obtaining the relevant content of the video effect of any reference video to be displayed and performing online preview. At this time, the video online preview request sent by the display device carries the lyric address and the video thumbnail address, and the server issues the corresponding lyric and video thumbnail file according to the two addresses.
And S035, receiving the lyrics and the video thumbnail file returned by the server according to the video online preview request.
According to the number of reference videos provided in the display equipment and the lyric address of each reference video, the server issues a corresponding number of lyrics, wherein each lyric corresponds to one reference video; and according to the video thumbnail address of each reference video, the server issues a corresponding number of video thumbnail files, wherein each video thumbnail file corresponds to one reference video. After the display equipment receives each lyric and video thumbnail file returned by the server, the display equipment displays the video thumbnail file corresponding to the appointed video, and the video thumbnail file is displayed at one side of the corresponding singer name and video title; the lyrics are stored, so that the lyrics can be called and displayed conveniently during subsequent video selection or video recording.
Since the singer name and the video title can be displayed directly after the display device acquires the recommended video list information, while the server returns only the corresponding addresses for the lyrics and the video thumbnail files when it returns the recommended video list information, the display device needs to request the lyrics and video thumbnail files from the server separately. Thus, the display device generates an original thumbnail for presentation while the server has not yet returned the corresponding lyrics and video thumbnail file.
A schematic diagram of an original thumbnail according to an embodiment is exemplarily shown in fig. 14. Referring to fig. 14, between the time when the user triggers the video online preview request and the time when the server returns the corresponding lyrics or video thumbnail, the display device has not yet received the lyrics and video thumbnail files returned by the server; at this time, an original thumbnail is generated and displayed on the selection interface. The original thumbnails are provided by the display device and may be system pictures; they are displayed on the selection interface, specifically on one side of the corresponding singer name and video title, one for each reference video.
When the display device executes video recording according to the operation of a user, the display device acquires the relevant information of each reference video from the server and generates a video based on a specified video determined by the user.
A flow chart of a video recording method according to an embodiment is exemplarily shown in fig. 15. Referring to fig. 15, a video recording method provided in an embodiment of the present invention is applied to a display device, and the method includes the following steps:
and S11, sending a video online preview request to the server, wherein the video online preview request is used for requesting the server to issue interface information.
Based on the operation of a user, a special-effect video recording function is started, the display equipment sends a video online preview request to the server to acquire related contents required by video recording, and the interface information comprises at least one of a singer name, a video title, a video address, lyrics, a video thumbnail file and video materials. The lyrics and the video material can be acquired in a plurality of optional manners, for example, the lyrics and the video material can be acquired when the display device sends a video online preview request, or can be acquired when the display device sends a specified video content acquisition request in the subsequent video recording.
The process of sending the video online preview request provided in this embodiment may refer to the contents of steps S031 through S035 provided in the foregoing embodiment, and details are not described here again.
And S12, receiving the interface information returned by the server according to the video online preview request, and displaying the singer name, the video title, the lyrics, the video thumbnail file, the video material, and the video content corresponding to the video address on a selection interface.
After receiving the related information of each reference video returned by the server, the display device displays the singer name, the video title, the lyrics, the video thumbnail file and the video material corresponding to each reference video on the selection interface, and simultaneously displays the video content corresponding to the video address on the selection interface in an online playing mode, so that a user can select a specific video which the user wants to record from the multiple reference videos displayed on the selection interface.
A distribution diagram of a selection interface according to an embodiment is exemplarily shown in fig. 16. Referring to fig. 16, specifically, the process by which the display device generates a selection interface to display the singer name, the video title, the lyrics, the video thumbnail file, the video material, and the video content corresponding to the video address includes: the selection interface includes a video list interface and a video preview interface. The singer name, video title, and video thumbnail file corresponding to each video available for video recording are displayed on the video list interface. When any reference video in the video list interface is selected as the designated video, the video address corresponding to the designated video is sent to the player; the player plays the designated video content online according to the video address, the designated video content is displayed on the video preview interface, and the lyrics and video materials are displayed on the video preview interface. A ready-to-record interface is generated below the video preview interface.
The display interface in the display equipment can simultaneously display a video list interface and a video preview interface, wherein the video list interface is an interface for displaying the singer name and the video title of each reference video, and the video preview interface is an interface for online playing the video content of any reference video. The video list interface and the video preview interface can be displayed on the selection interface according to the left-right position relationship, and can also be displayed on the selection interface according to the up-down position relationship.
During actual display, the singer name and the video title corresponding to each video available for video recording are displayed in the video list interface. Based on the user's operation, if the user selects one video in the video list interface as the designated video, then in order to show the user the video effect of that reference video, the method provided by this embodiment can play the content of the designated video online, that is, the designated video content is displayed on the video preview interface and played online.
Since the occupied space of each reference video is large, if a large amount of video content is placed on the display device side, a large storage space is occupied. Therefore, when the display device displays the video effect of each reference video, the content played online is displayed, namely the corresponding reference video is played online and circularly according to the video address.
Specifically, when a user selects any reference video as a designated video based on a video list interface, the display device sends a video address corresponding to the designated video to the player, and the player plays designated video content on line according to the video address and displays the video content on a video preview interface. And in order to completely present the video effect of each reference video, the display equipment can synchronously display the lyrics and the video materials corresponding to the specified video in the video preview interface, superpose the video materials at the corresponding positions of the specified video content, and display the lyrics at the bottom position of the video preview interface.
That is, when a user selects a certain reference video, the display device sends the video address of the video to the player for online playing, simultaneously calls lyrics and video materials corresponding to the specified video downloaded from the server, and overlaps the lyrics, the specified video content and the video materials to be displayed on the video preview interface. The video preview interface can simultaneously present video content (songs), lyrics and special effects to present the video effect of the specified video.
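A minimal sketch of this online preview step, assuming the player is an Android VideoView; looping playback and the overlay of lyrics and video material are only indicated in comments.

```java
import android.net.Uri;
import android.widget.VideoView;

// Sketch only: videoPreviewView is assumed to be the VideoView inside the
// video preview interface; the designated video is played online by its address.
public class VideoPreviewPlayer {

    public static void playDesignatedVideo(VideoView videoPreviewView, String videoAddress) {
        videoPreviewView.setVideoURI(Uri.parse(videoAddress));              // play online by address
        videoPreviewView.setOnPreparedListener(mp -> mp.setLooping(true));  // loop the preview
        videoPreviewView.start();
        // The lyrics and video material downloaded from the server would be drawn
        // in separate overlay views on top of this VideoView (not shown here).
    }
}
```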
For example, referring to fig. 13 again, if the designated video selected by the user is "learn cat call", at this time, the selection interface synchronously displays the video effect of "learn cat call", that is, the player plays the "learn cat call" video online according to the video address corresponding to the "learn cat call" video, and at the same time, the video displays the materials of the "learn cat call" such as cat ear, cat nose, cat whisker and cat claw. And calling the lyrics corresponding to the learning cat call video for displaying. The lyrics display may be displayed in a display manner commonly employed in MVs.
In order to enable a user to record according to a specified video after determining the video, a recording preparation interface can be generated on the selection interface and used for starting a video recording process. The interface for preparing recording can be positioned below the video preview interface and can also be determined according to the presentation mode of the actual selection interface.
And S13, receiving a selection information instruction triggered by the user based on the selection interface, wherein the selection information instruction is used for selecting the specified video in the selection interface.
When the user selects the reference video to be recorded, the user selects the reference video in a selection interface of the display device, specifically, selects the reference video in a video list interface of the selection interface. The video list interface displays a plurality of selectable reference videos, so that the user can conveniently select the reference videos. Each time a reference video is selected, a selection information instruction is correspondingly generated, and the display equipment can determine the designated video selected by the user according to the selection information instruction.
The designated video includes a character, a song, lyrics, and video material. In the designated video, the character may move his or her body to present different actions according to the progress of the song; these actions serve as example actions for the user to imitate. After the character's head or hands move, the video material moves along with them, so that the video material is always displayed dynamically around the character's head or body. The lyrics are displayed in synchronization with the playing progress of the song, so that the user is prompted in time to sing along with the song.
The method provided by the embodiment of the invention can change the background color of the selection interface according to the color of the video thumbnail file corresponding to the selected reference video when the user selects the designated video based on the selection interface.
Fig. 17 is a flowchart illustrating a method of changing the color of the interface background according to an embodiment. Referring to fig. 17, a method for changing a background color of an interface according to an embodiment of the present invention includes the following steps:
s131, responding to a selection instruction generated when a user triggers the specified video in the selection interface, and acquiring a video thumbnail file of the specified video.
And selecting the appointed video in the selection interface by the user according to the video title and the singer name displayed in the selection interface. The selection interface includes a video listing interface for displaying a video title, a singer name, and a video thumbnail file.
When a user selects a designated video in a video list interface, a video thumbnail file is corresponding to the designated video, and the video thumbnail file is any frame image of the designated video. The image is usually provided with a plurality of colors, so that the image can be used as a reference for subsequently changing the background color of the selection interface.
S132, calling a palette class, and extracting the color value of each color in the video thumbnail file.
The video thumbnail file is in the form of a picture. When a user selects different videos in the video list interface, the selected different designated videos correspond to different video thumbnail files, namely different pictures are provided with different colors. In order to accurately change the background color of the selection interface, in this embodiment, corresponding color values need to be extracted from the video thumbnail file.
Specifically, in this embodiment, calling a palette class to extract a color value of each color in the video thumbnail file includes: calling a palette class and extracting the pixel value of the video thumbnail file; clustering the pixel values with the same color characteristics to obtain different types of colors; a color value is calculated for each type of color, the color value comprising a number of pixel values.
In the operating system of the display device, for example the Android system, a Palette class is provided. The video thumbnail file is composed of a number of pixel points, and different pixel points have different pixel values. In this embodiment, the pixel values are extracted by using the Palette class. Since pixel values belonging to the same color are generally the same, the pixel values having the same color characteristics are clustered, and the video thumbnail file is divided into several types of colors according to its color composition. Finally, the color value of each type of color is calculated according to the number of pixel values of that type; a color value may be composed of a plurality of pixel values.
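A minimal sketch of this extraction step, assuming the AndroidX Palette library: Palette.from(bitmap).generate() clusters the thumbnail's pixels into swatches, where each swatch corresponds to one type of color and getPopulation() gives the number of pixels in that cluster.

```java
import android.graphics.Bitmap;
import androidx.palette.graphics.Palette;
import java.util.List;

// Sketch of S132: cluster the thumbnail's pixel values into color types (swatches).
public class ThumbnailColorExtractor {

    public static List<Palette.Swatch> extractColorClusters(Bitmap thumbnail) {
        // Synchronous generation; in practice this should run off the UI thread.
        Palette palette = Palette.from(thumbnail).generate();
        return palette.getSwatches();  // one entry per clustered color type
    }
}
```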
And S133, comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file.
In the method provided by this embodiment, the criterion for changing the background color of the selection interface is to determine which color contained in the video thumbnail file has the largest color value, and determine the color corresponding to the largest color value as the background color of the selection interface. Therefore, in order to accurately determine which color contained in the video thumbnail file has the largest color value, the present embodiment adopts the following two methods.
In one possible embodiment, the main color of the video thumbnail file is determined as follows: calculating the ratio of each color according to the color value of each color in the video thumbnail file; comparing the ratio of each color to determine the color with the largest ratio; and taking the color corresponding to the largest proportion value as the main color of the video thumbnail file.
The video thumbnail file includes a plurality of colors, and color values of each color may be the same or different, so that a ratio of each color may be determined first. The fraction value may be determined according to the ratio of the color value of a certain color to the sum of the color values of all colors. And after the ratio of each color is calculated, comparing according to the size of the ratio, wherein the color with the largest ratio is the color with the largest color value in the video thumbnail file, and therefore, the color corresponding to the largest ratio is used as the main color of the video thumbnail file.
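A sketch of this ratio-based comparison under the same Palette assumption: the proportion of each color is its pixel population divided by the total population, and the swatch with the largest proportion is taken as the main color.

```java
import androidx.palette.graphics.Palette;
import java.util.List;

// Sketch only: returns the RGB value of the color with the largest share of pixels,
// or a fallback color if no swatches were extracted.
public class DominantColorByRatio {

    public static int mainColor(List<Palette.Swatch> swatches, int fallbackColor) {
        long total = 0;
        for (Palette.Swatch s : swatches) {
            total += s.getPopulation();
        }
        Palette.Swatch best = null;
        double bestRatio = -1;
        for (Palette.Swatch s : swatches) {
            double ratio = total == 0 ? 0 : (double) s.getPopulation() / total;
            if (ratio > bestRatio) {   // keep the color with the largest proportion
                bestRatio = ratio;
                best = s;
            }
        }
        return best != null ? best.getRgb() : fallbackColor;
    }
}
```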
In another possible embodiment, the dominant color of the video thumbnail file is determined as follows: extracting a red color value, a green color value and a blue color value of each color value based on the color value of each color in the video thumbnail file; calculating a variance between the red, green and blue color values for each color; and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
The plurality of colors included in the video thumbnail file can be represented by RGB color values, and color value components of each color in the RGB space are respectively extracted, that is, RGB color values of each color are extracted according to the color value of each color, including a red color value, a green color value, and a blue color value. And calculating the average value among the red color value, the green color value and the blue color value corresponding to the color value of each color, and calculating the variance among the red color value, the green color value and the blue color value of the color according to the average value. And comparing the variances of all the colors, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
In this embodiment, the variance calculation formula is as follows:
σ² = [(r − u)² + (g − u)² + (b − u)²] / 3;
where σ² is the variance, r is the red color value, g is the green color value, b is the blue color value, and u is the mean of r, g, and b.
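A sketch of the variance-based comparison under the same Palette assumption, applying the formula above to the RGB components of each clustered color; the fallback color parameter is an assumption for the case where no swatches are extracted.

```java
import android.graphics.Color;
import androidx.palette.graphics.Palette;
import java.util.List;

// Sketch only: the color whose RGB components have the largest variance is taken
// as the main color of the video thumbnail file.
public class DominantColorByVariance {

    static double variance(int rgb) {
        int r = Color.red(rgb), g = Color.green(rgb), b = Color.blue(rgb);
        double u = (r + g + b) / 3.0;  // mean of r, g, and b
        return ((r - u) * (r - u) + (g - u) * (g - u) + (b - u) * (b - u)) / 3.0;
    }

    public static int mainColor(List<Palette.Swatch> swatches, int fallbackColor) {
        int best = fallbackColor;
        double bestVariance = -1;
        for (Palette.Swatch s : swatches) {
            double v = variance(s.getRgb());
            if (v > bestVariance) {
                bestVariance = v;
                best = s.getRgb();
            }
        }
        return best;
    }
}
```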
And S134, changing the background color of the selection interface based on the main color and displaying.
After the main color of the video thumbnail file is determined, the background color of the selection interface can be changed. The method comprises the following steps: acquiring a background picture of a selection interface; the color of the background image is set to the main color and displayed. The color of the background image of the selection interface is changed into the main color, so that after a user selects a certain specified video in the video list interface, the background color of the selection interface is immediately changed into the main color of the video thumbnail file corresponding to the specified video, and richer visual effects are provided for the user.
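A minimal sketch of step S134, assuming the background of the selection interface is an Android View; the view reference and color parameter are illustrative.

```java
import android.view.View;

// Sketch only: replace the background color of the selection interface with the main color.
public class SelectionInterfaceBackground {

    public static void applyMainColor(View selectionInterfaceBackground, int mainColorRgb) {
        selectionInterfaceBackground.setBackgroundColor(mainColorRgb);
    }
}
```

In practice this would run each time a selection information instruction is received, so the background follows the user's current selection.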
The method for changing the background color of the selection interface comprises the steps of calling a palette type provided by display equipment, obtaining each color value in the video thumbnail file, comparing which color has the highest proportion in the video thumbnail file based on the color value of each color, and setting the background image of the selection interface as the color corresponding to the highest proportion.
For example, taking a "student cat's call" video as an example, where the colors in the video thumbnail file include pink, green, and blue, a palette class is called to obtain the color values of pink, green, and blue, respectively. According to any one of the two comparison methods provided in the foregoing embodiment, the color values of the three colors are compared, and if the ratio of the color value of pink is higher than the other two colors, the background of the selection interface is set to pink. And if the next video selected by the user is 'curry' and the colors in the video thumbnail file comprise blue, brown and black, calling the palette to respectively obtain the color values of the blue, brown and black. According to any one of the two comparison methods provided in the foregoing embodiment, the color values of the three colors are compared, and if the ratio of the color value of blue to the color values of the other two colors is higher, the background of the selection interface is set to be blue. Therefore, the background color of the selection interface may change from pink to blue during the user's selection process.
And S14, sending a specified video content acquisition request generated when the interface is triggered to be recorded to the server, wherein the specified video content acquisition request carries a video address corresponding to a specified video, and the specified video content acquisition request is used for requesting the server to issue specified video content corresponding to the specified video.
After the user selects the designated video which the user wants to record in the selection interface, the user clicks a recording preparation interface displayed below the selection interface, and at the moment, the display equipment generates a designated video content acquisition request and sends the designated video content acquisition request to the server. The specified video content acquisition request is used to acquire the specified video content.
Because the designated video content needs network resources for online playing, if the network is congested, the designated video content may fail to play or may stop playing partway through. Because the shooting process of the camera does not need the network, the camera can remain in a shooting state; the video material and the lyrics are stored locally on the display device, and displaying and changing them does not need the network. As a result, the display device would continue recording after the designated video content stops playing due to network problems, so the picture recorded by the camera would lack the sound of the designated video content, or the subject would be unable to follow the designated video's actions in time, affecting the video recording effect.
Therefore, in the method provided by this embodiment, to ensure smooth video recording, when the user selects the specified video to perform video recording, the display device sends the request to the server again, and the server issues the specified video content corresponding to the specified video according to the specified video content acquisition request, and the specified video content is stored locally in the display device by downloading, so that the specified video content is not affected by the network during playing.
Fig. 18 is a schematic diagram illustrating generation of a download interface according to the present embodiment. Referring to fig. 18, when the display device downloads the specified video content from the server according to the video address, the display device switches the selection interface to a download interface, which may be displayed on an upper layer of the selection interface. The download interface displays a download prompt, such as "Downloading material, please wait …", a download progress indicator, and a cancel-download button.
When the display device acquires the specified video content, it establishes a network connection to the video address of the specified video, calls a file input stream reading method (InputStream.read()) to read the video data, and stores the downloaded content locally on the display device.
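A minimal sketch of this download step, assuming a plain HttpURLConnection; the local file path and buffer size are illustrative choices, not values specified by the patent.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch only: download the specified video content by its video address and
// store it locally, reading the network stream with InputStream.read().
public class DesignatedVideoDownloader {

    public static void download(String videoAddress, File localFile) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(videoAddress).openConnection();
        try (InputStream in = connection.getInputStream();
             FileOutputStream out = new FileOutputStream(localFile)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {  // read the stream chunk by chunk
                out.write(buffer, 0, read);           // write to local storage
                // Download progress could be reported here to update the download interface.
            }
        } finally {
            connection.disconnect();
        }
    }
}
```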
When the user clicks the ready-to-record interface to prepare for video recording, if the specified video selected by the user has been used before, the specified video content is already stored on the display device side; in this case, the display device can directly call the stored specified video content without sending a request to the server.
The display device may start video recording after acquiring the specified video content from the server or calling the stored specified video content.
And S15, generating a recording preparation interface, and displaying the pictures shot by the camera and the appointed video content returned by the server according to the appointed video content acquisition request on the recording preparation interface.
After a video recording application program in the display equipment is started, the camera is always in an open state, and a picture of a user can be shot in real time, wherein the user is a shot person. And displaying the real-time picture shot by the camera, and displaying the acquired specified video content returned by the server as a reference when the user shoots the video.
After the display equipment finishes downloading the specified video content, the interface of the display equipment jumps to a recording preparation interface from a downloading interface, and the recording preparation interface is a waiting interface before recording is started and is used for providing waiting time for a user to start recording.
Fig. 19 is a schematic diagram illustrating generation of a ready-to-record interface according to the present embodiment; a flowchart of a method of generating a ready-to-record interface according to the present embodiment is illustrated in fig. 20. Referring to fig. 19 and fig. 20, in particular, the method provided by the embodiment of the present invention generates a process of preparing a recording interface, including:
s1501, obtaining a preview picture based on the picture shot by the camera.
S1502, the designated video content is displayed on the upper right of the preview screen.
S1503 generates a start recording button with a shaded area, and displays the start recording button with a shaded area at the bottom of the preview screen.
S1504, a ready-to-record interface is generated with the preview screen as the bottom layer, the designated video content as the middle layer, and the start-to-record button with the shaded area as the top layer.
The display equipment acquires the picture shot by the camera and displays the picture as a preview picture. And placing the obtained specified video content at the upper right part of the preview picture for displaying, wherein the specified video content is superposed with the preview picture and is positioned at the upper layer of the preview picture. And meanwhile, a recording starting button with a shaded area is generated at the bottom of the preview picture, the recording starting button is used for starting a recording starting function, and the shaded area is used for displaying the lyrics subsequently.
A recording preparation interface is formed by a preview picture, designated video content and a recording start button with a shaded area, wherein the preview picture is used as a bottom layer, the designated video content is used as a middle layer, and the recording start button with the shaded area is used as a top layer.
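A minimal sketch of the three-layer composition, assuming the ready-to-record interface is built from Android views stacked in a FrameLayout (children added later are drawn on top); the three child views are assumed to be created elsewhere, and their positioning (upper right, bottom) is only indicated in comments.

```java
import android.content.Context;
import android.view.View;
import android.widget.FrameLayout;

// Sketch only: the order of addView() calls gives
// preview (bottom) -> designated video content (middle) -> start-recording button (top).
public class ReadyToRecordInterface {

    public static FrameLayout build(Context context, View previewScreen,
                                    View designatedVideoView, View startRecordingButton) {
        FrameLayout root = new FrameLayout(context);
        root.addView(previewScreen);         // bottom layer: full-screen camera preview
        root.addView(designatedVideoView);   // middle layer: placed at the upper right via layout params
        root.addView(startRecordingButton);  // top layer: button with shaded area for lyrics
        return root;
    }
}
```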
The display equipment presents a recording preparation interface, and video recording can be carried out at any time. And clicking a recording start button displayed in the recording preparation interface by a user to generate a recording start instruction, so that the display equipment can record and store the pictures, the lyrics and the video materials shot by the camera according to the recording start instruction.
And S16, receiving a recording starting instruction triggered by a user, displaying the picture shot by the camera, the designated video content, the designated lyrics and the designated video material on a recording interface, and starting video recording, wherein the designated lyrics refer to the lyrics corresponding to the designated video, and the designated video material refers to the video material corresponding to the designated video.
A user triggers a recording start button displayed in a recording preparation interface to generate a recording start instruction, and a display device starts video recording according to the recording start instruction, namely when the user clicks the recording start button, a camera shoots a real-time picture; the display equipment starts to play the appointed video content, a user sings or acts along with the appointed video content, when lyrics appear, the appointed lyrics are displayed at the bottom of the recording interface and are dynamically displayed along with the playing progress of the appointed video content; meanwhile, the video material changes correspondingly along with the action of the user. And finally, when video synthesis is carried out, synthesizing the picture shot by the camera, the specified lyrics and the specified video material into a video file.
Fig. 21 is a schematic diagram illustrating generation of a recording interface according to the present embodiment; fig. 22 is a flowchart illustrating a method for generating a recording interface according to the present embodiment. Referring to fig. 21 and 22, the process of starting video recording by the display device after receiving a recording start instruction, displaying a picture shot by the camera, designated video content, designated lyrics and designated video material on a recording interface, includes:
and S161, receiving a recording starting instruction generated when the recording starting button is triggered.
And S162, responding to the recording starting instruction, and acquiring the specified lyrics and the specified video material corresponding to the specified video.
S163, a preview screen including a screen shot by the camera and a designated video material is generated.
The display device receives the start recording instruction generated when the user clicks the start recording button, and starts to record the video. The specified video material and the specified lyrics corresponding to the specified video are called and displayed on the display interface of the display device. Meanwhile, the camera shoots a real-time picture for preview, and the specified video material is superimposed on the picture shot by the camera to obtain a preview picture. The specified video material is the same as the video material presented to the user in the video preview interface when the reference video was selected.
A flowchart of a method of generating a preview according to the present embodiment is illustrated in fig. 23. Referring to fig. 23, specifically, generating a preview screen including a screen shot by a camera and a designated video material includes:
s1631, obtaining the picture shot by the camera in real time, and displaying the picture in a full screen.
And S1632, based on the target position in the picture shot by the camera, overlaying the specified video material to the picture shot by the camera to obtain a preview picture.
The photographed person is located within the shooting range of the camera, and the display device displays the real-time picture shot by the camera in full screen. The display device acquires the face coordinates of the photographed person in the original picture shot by the camera; there may be one, two, three or more photographed persons. Based on the body coordinates of each photographed person, including the coordinates of the head, hands, body and the like, the video materials are superimposed on the head or other body parts of the photographed person in the original picture shot by the camera.
The video material comprises facial beautification, stickers, filters and the like. Taking "learning cat call" as an example, the display device superimposes the sticker effect named "zhaocaimao" on the picture shot by the camera with the video material type of "special effect classification (sticker)", so that the "cat ear" sticker is located on the head of the shot in the picture, the "cat paw" sticker is located on the hand of the shot in the picture, the "cat necklace" sticker is located on the neck of the shot in the picture, and the "wealth cat" sticker is located on the body part of the shot in the picture.
The display equipment acquires the body coordinates of each photographed person in real time, and the designated video materials corresponding to the designated videos are overlaid to the pictures shot by the camera through a face recognition algorithm, so that the personalized requirements of the user are met.
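For illustration only, the following is a minimal Java sketch of superimposing a "cat ear" sticker above a detected face rectangle; the FaceBox type stands in for the output of the face recognition algorithm, whose actual form is not specified in the present embodiment.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Rect;
    import java.util.List;

    public class StickerOverlay {

        // Hypothetical detector output: one bounding box per photographed person,
        // expressed in the coordinate system of the camera frame.
        public static class FaceBox {
            public Rect face;
        }

        // Draws the sticker scaled to the face width, placed just above the head.
        public Bitmap overlay(Bitmap cameraFrame, Bitmap catEarSticker, List<FaceBox> faces) {
            Bitmap out = cameraFrame.copy(Bitmap.Config.ARGB_8888, true);
            Canvas canvas = new Canvas(out);
            for (FaceBox box : faces) {
                int w = box.face.width();
                int h = w * catEarSticker.getHeight() / catEarSticker.getWidth();
                Rect dst = new Rect(box.face.left, box.face.top - h,
                                    box.face.right, box.face.top);
                canvas.drawBitmap(catEarSticker, null, dst, null);
            }
            return out;
        }
    }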
And S164, starting playing the specified video content displayed on the upper right of the preview picture.
After the user clicks the start recording button, the display device passes the save path of the specified video content to the player, calls the asynchronous preparation method (prepareAsync) to initialize the player, and then calls the start method (start) to start playing the specified video content. The specified video content is displayed as a video picture on the upper right of the preview picture; the video picture may also be displayed at other positions of the preview picture, provided that the image of the photographed person in the preview picture is not blocked.
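For illustration only, the following is a minimal Java sketch of this asynchronous playback flow using Android's MediaPlayer (setDataSource, prepareAsync, start); the save path variable is illustrative.

    import android.media.MediaPlayer;
    import java.io.IOException;

    public class SpecifiedVideoPlayer {

        private final MediaPlayer player = new MediaPlayer();

        // Hand the save path of the downloaded video content to the player, prepare it
        // asynchronously, and start playback once preparation completes.
        public void play(String savedPath) throws IOException {
            player.setDataSource(savedPath);
            player.setOnPreparedListener(MediaPlayer::start);
            player.prepareAsync();
        }

        public void release() {
            player.release();
        }
    }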
And S165, switching the recording start button with the shadow area to a control button with the shadow area.
And S166, acquiring the total time length of the appointed video, and displaying the total time length on the control button in a countdown mode.
After the start recording button in the interface for preparing recording is triggered, the start recording button is switched to a control button. The control button is the "enter key" in fig. 21, and it controls the end of the video recording.
If the user wants to record a video whose duration equals that of the specified video content, or if the display device should automatically end recording after the specified video content has finished playing, then, according to the method provided by this embodiment, the display device displays the total duration of the specified video content on the control button in a countdown mode, and when the countdown ends, the video recording process ends.
Specifically, the display device acquires the total duration of the specified video content in seconds, displays it in digital form in the middle of the control button, and starts a countdown. During the countdown, the user may click the control button again; if the "ok" button is clicked, recording stops. If the countdown finishes, recording stops automatically.
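For illustration only, the following is a minimal Java sketch of the countdown behaviour using android.os.CountDownTimer; the stopRecording callback is a hypothetical hook into the recording logic.

    import android.os.CountDownTimer;
    import android.widget.Button;

    public class RecordingCountdown {

        // Shows the remaining seconds on the control button and stops recording either
        // when the countdown finishes or when the user clicks the button again.
        public CountDownTimer start(Button controlButton, long totalDurationMs, Runnable stopRecording) {
            CountDownTimer timer = new CountDownTimer(totalDurationMs, 1000) {
                @Override
                public void onTick(long millisUntilFinished) {
                    // remaining time in seconds, shown in the middle of the control button
                    controlButton.setText(String.valueOf(millisUntilFinished / 1000));
                }

                @Override
                public void onFinish() {
                    stopRecording.run();   // countdown ended: stop recording automatically
                }
            };
            // clicking the control button before the countdown ends also stops recording
            controlButton.setOnClickListener(v -> {
                timer.cancel();
                stopRecording.run();
            });
            timer.start();
            return timer;
        }
    }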
And S167, displaying the appointed lyric in a shadow area of the control button, wherein the shadow area is displayed at the bottom of the preview picture.
When recording starts, the display device calls the specified lyrics corresponding to the specified video and displays them in the shadow area of the control button. The shadow area improves the clarity of the displayed lyrics and makes them easier for the user to read.
A flow chart of a method for dynamic display of lyrics according to the present embodiment is illustrated in fig. 24. Referring to fig. 24, the method provided in this embodiment displays the lyrics in a dynamic display manner during video recording, and for this purpose, the process of displaying the specified lyrics in the shadow area of the control button includes:
s1671, obtaining the lyric content, the occurrence time and the display time length of each character in each lyric in the appointed lyric.
And S1672, detecting the playing progress of the appointed video content after the video content starts playing.
And S1673, when the appearance time of each lyric is consistent with the time corresponding to the playing progress, displaying the target lyric content corresponding to the appearance time.
And S1674, redrawing the frame of the character according to the display time length of each character in the target lyric content, and realizing the dynamic display of the designated lyric progress in a shadow area.
In order to realize dynamic lyric display, the display equipment acquires a lyric file of the specified lyric, and the lyric file is in a txt format. After traversing the whole lyric txt file, all contents and time nodes to be displayed are obtained, namely the information obtained from the lyric file comprises the lyric content, the occurrence time of each lyric and the display duration of each character in each lyric. According to the playing progress of the appointed video content, when the time corresponding to the current playing progress is consistent with the appearance time of a certain sentence of lyrics, the sentence of lyrics is displayed, and the frame of the words is changed according to the display time length of each word in the sentence of lyrics, so that the dynamic display of the lyrics is realized.
Taking "study cat call" as an example, taking one lyric "we study cat call together" in the lyric file as an example, the appearance time of the lyric is [000:01.373], and the display time length of each character is [254,254,253,254,254,251,557 ].
The start of playback of the specified video content is taken as the time origin. [000:01.373] is the occurrence time of this first lyric line, i.e. the line is displayed 1.373 seconds after the specified video content starts playing. [254,254,253,254,254,251,557] are the display durations of the characters in the line; for example, the first character is displayed for 254 milliseconds, and the next character for the 254 milliseconds after that.
First, each lyric line is stored in 3 sections: the occurrence time, the display duration of each character, and the lyric content. After the specified video content starts playing, the playback time is monitored; when the start time of a line is reached, that line is displayed, and then the character frames are redrawn according to the duration of each character, achieving the lyric progress display effect.
As playback proceeds through a line, the characters that are being sung or have already been sung are displayed with a red frame, while the characters that have not yet been sung remain in a white font. For example, if the time corresponding to the current playing progress of the specified video content corresponds exactly to the character "I", the frame of "I" is redrawn and switched from white to red; when the next character "people" is reached, the frame of "people" is redrawn from white to red while the red frame of "I" is retained, so that the sung characters "I" and "people" have red frames and the unsung characters "learn cat call together" remain white.
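For illustration only, the following is a minimal Java sketch of the three-section lyric record and of computing how many characters should be framed in red at a given playback position; the exact text layout of the lyric txt file is an assumption, since the present embodiment only names the three sections.

    public class LyricLine {

        final long startMs;        // occurrence time, e.g. [000:01.373] -> 1373 ms
        final int[] durationsMs;   // display duration of each character, e.g. {254, 254, 253, ...}
        final String text;         // lyric content of the line

        LyricLine(long startMs, int[] durationsMs, String text) {
            this.startMs = startMs;
            this.durationsMs = durationsMs;
            this.text = text;
        }

        // Parses an occurrence time written as "minutes:seconds.milliseconds",
        // e.g. "000:01.373" -> 1373 (format assumed from the example above).
        static long parseTime(String stamp) {
            String[] p = stamp.split("[:.]");
            return Long.parseLong(p[0]) * 60_000
                    + Long.parseLong(p[1]) * 1_000
                    + Long.parseLong(p[2]);
        }

        // Number of characters whose frame should be drawn in red at playbackMs:
        // characters being sung or already sung are counted, unsung ones are not.
        int highlightedChars(long playbackMs) {
            if (playbackMs < startMs) {
                return 0;                    // this line has not been reached yet
            }
            long elapsed = playbackMs - startMs;
            int count = 0;
            for (int d : durationsMs) {
                count++;                     // this character is being or has been sung
                if (elapsed < d) {
                    break;                   // still inside this character's duration
                }
                elapsed -= d;
            }
            return count;
        }
    }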
Displaying the specified lyrics in this dynamic manner allows the photographed person to sing accurately, avoiding wrong or missed words during shooting, which would otherwise cause the recorded mouth shape of the photographed person not to correspond to the background music and affect the recording effect.
S168, taking the preview picture as a bottom layer, taking the appointed video content in the playing state as a first middle layer, taking a control button with a shaded area in the countdown state as a second middle layer and taking the appointed lyrics as a top layer, and generating a recording interface.
When video recording is performed, a recording interface is displayed on the display device. The recording interface consists of four parts: the preview picture, the video picture (the specified video content in the playing state), the control button and the specified lyrics. In some embodiments, the four parts are handled from the start of recording and their corresponding steps are performed simultaneously, i.e. steps S161 to S168 start together: capturing the picture, superimposing the video material, playing the specified video content, displaying the countdown on the control button and dynamically displaying the specified lyrics in the picture. Starting the four parts simultaneously ensures that, when the display device records the video, the actions of the photographed person, the display of the video material and the dynamic change of the specified lyrics are synchronized with the specified video content, which guarantees the video recording efficiency and meets the personalized requirements of the user.
The display equipment synchronously displays a currently recorded picture in a preview picture during video recording, wherein the picture comprises a picture shot by a camera, a video material and specified lyrics, and when the recording is finished, the video file comprises the picture shot by the camera, the video material and the content of the specified lyrics.
A flowchart of a method of video compositing according to the present embodiment is illustrated in fig. 25. Referring to fig. 25, in order to implement the picture preview and obtain the specified video file, the video recording method provided in the embodiment of the present invention further includes:
and S17, receiving a recording ending instruction generated when the control button is triggered, wherein the state of the control button triggered comprises a triggered state when the countdown is ended and a triggered state when the control button is clicked when the countdown is not ended.
And S18, responding to the recording ending instruction, synthesizing the specified video material, the specified lyrics and the picture shot by the camera to obtain the specified video file.
When the user clicks the control button displayed in the recording interface, or when the countdown displayed on the control button finishes, the current video recording process ends. The recording ending instruction generated by the control button is therefore produced either when the user actively clicks the control button or automatically when the countdown ends.
The display device ends the recording process according to the recording ending instruction and, at the same time, synthesizes the recorded specified video material, the specified lyrics and the pictures shot by the camera to obtain the specified video file. For video synthesis, a renderer (OpenGLRender), a view previewer (GLSurfaceView) and a media encoder (MediaCodec) are arranged in the display device: the renderer synthesizes the pictures shot by the camera, the lyrics and the video materials into the same picture frame; the view previewer converts the picture frame synthesized by the renderer into byte data and displays the final image effect; and finally the media encoder encodes the byte data converted from the synthesized image to obtain a video file in mp4 format.
Fig. 26 is a flowchart illustrating a method of composing a specified video file according to the present embodiment; fig. 27 is a diagram exemplarily showing a data flow for synthesizing a specified video file according to the present embodiment. Specifically, referring to fig. 26 and 27, the process of the display device synthesizing a specified video material, specified lyrics, and a picture taken by a camera to obtain a specified video file in response to the recording ending instruction includes:
and S181, responding to the recording ending instruction, calling a renderer, rendering and synthesizing the appointed video material and the appointed lyric corresponding to each frame of video picture and the picture shot by the camera to obtain a synthesized image.
In the process of executing video recording, the camera sends the shot pictures to the renderer along with the playing of the specified video content, the display equipment sends the video material and the specified lyrics corresponding to each moment or each frame of picture to the renderer, and the renderer renders and synthesizes the specified video material, the specified lyrics and the pictures shot by the camera to obtain a synthesized image. The composite image is a video picture taking a frame as a unit, namely, when the camera shoots one frame of picture, the display equipment sends the frame of picture, the corresponding video material and the lyrics to the renderer for rendering and synthesis, and the obtained picture frame is previewed in real time and displayed in the previewing picture.
The renderer has a built-in effect rendering helper (effect render helper), and the display device calls the texture processing (processTexture) method of the effect rendering helper to draw the current effect and the lyrics to be displayed at the current time onto the original camera picture.
After the recording is finished, the display equipment calls the renderer to render and synthesize each frame of picture shot by the camera with the corresponding specified video material and the corresponding specified lyrics according to the recording finishing instruction, and a plurality of synthesized pictures can be obtained.
S182, calling a view previewer to convert the synthetic image into byte type data;
The view previewer (GLSurfaceView) has a built-in frame renderer that executes the frame drawing (onDrawFrame) method. After camera preview starts, the original image of every frame shot by the camera is called back in the frame drawing (onDrawFrame) method of the view previewer (GLSurfaceView).
The callback refers to a monitoring mechanism set in the display device and used for data monitoring. After the camera shoots a frame of picture, the monitoring mechanism monitors a new shot picture, and the new shot picture is obtained by the renderer and rendered and synthesized, so that real-time rendering and synthesis of each frame of picture shot by the camera are realized. Then, each composite image is converted into byte type data by a frame renderer, and an image is previewed and displayed on a preview screen.
The frame drawing method provides a cache output (dispatch output buffer) method that converts the texture (Texture) of the synthesized image in the renderer (OpenGLRender) into byte type data, such as byte[] data. Each displayed frame corresponds to one group of byte[] data.
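For illustration only, the following is a minimal Java sketch of reading the composited frame back as byte data inside onDrawFrame using GLES20.glReadPixels; the cache output method named above is internal to the renderer, so this sketch only illustrates the general idea, and the FrameListener callback is an assumption.

    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    public class CompositeFrameReader implements GLSurfaceView.Renderer {

        public interface FrameListener { void onFrame(byte[] rgba); }  // assumed hook to the encoder

        private final int width;
        private final int height;
        private final ByteBuffer pixelBuffer;
        private final FrameListener listener;

        public CompositeFrameReader(int width, int height, FrameListener listener) {
            this.width = width;
            this.height = height;
            this.listener = listener;
            this.pixelBuffer = ByteBuffer.allocateDirect(width * height * 4)
                                         .order(ByteOrder.nativeOrder());
        }

        @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
        @Override public void onSurfaceChanged(GL10 gl, int w, int h) { GLES20.glViewport(0, 0, w, h); }

        @Override
        public void onDrawFrame(GL10 gl) {
            // The composited picture (camera frame + video material + lyrics) is assumed
            // to have been drawn into the current framebuffer before this point.
            pixelBuffer.rewind();
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
            byte[] frame = new byte[width * height * 4];
            pixelBuffer.rewind();
            pixelBuffer.get(frame);
            listener.onFrame(frame);   // one group of byte[] data per displayed frame
        }
    }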
And S183, calling a media encoder to encode the byte type data to obtain the specified video file.
The media encoder can encode video and audio simultaneously. Therefore, when encoding, the media encoder creates a video encoder and an audio encoder at the same time by calling the create encoder by type method (createEncoderByType), creating the video type "video/avc" and the audio type "audio/mp4a-latm", respectively.
Then information such as the frame rate, bit rate and color format of the video and the sample rate, channel count and sample format of the audio is specified. A media muxer (MediaMuxer) is then called to create a media muxing object and to specify the file save path and file format for muxing the encoded video and audio into a video file.
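For illustration only, the following is a minimal Java sketch of creating the two encoders and the media muxer with Android's MediaCodec, MediaFormat and MediaMuxer APIs; the frame rate, bit rate, sample rate and other numeric values are placeholders, and a real implementation would still need to convert the RGBA bytes from the renderer into the encoder's YUV input layout.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.io.IOException;

    public class RecorderCodecs {

        MediaCodec videoEncoder;
        MediaCodec audioEncoder;
        MediaMuxer muxer;

        public void setUp(String outputPath, int width, int height) throws IOException {
            // Video format: "video/avc" with placeholder frame rate, bit rate and color format.
            MediaFormat videoFormat = MediaFormat.createVideoFormat("video/avc", width, height);
            videoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            videoFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
            videoFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);

            // Audio format: "audio/mp4a-latm" with placeholder sample rate and channel count.
            MediaFormat audioFormat = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 2);
            audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128_000);
            audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE,
                    MediaCodecInfo.CodecProfileLevel.AACObjectLC);

            videoEncoder = MediaCodec.createEncoderByType("video/avc");
            audioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
            videoEncoder.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            audioEncoder.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

            // File save path and container format for muxing video and audio into one file.
            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        }
    }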
When a video file is mixed, video and audio are two encoding processes, so two sub-threads need to be created to respectively perform encoding of the video and the audio. The source of the audio includes sound at the time of playback of the specified video content and sound collected by a MIC provided in the display apparatus.
When recording video, the media encoder first registers a listener and receives the byte[] type data converted by the frame renderer. During video encoding, after the media encoder receives the byte[] data, the video encoder calls the input buffer (dequeueInputBuffer) method of the media encoder (MediaCodec) to apply for a buffer, and the byte[] data is written into the requested buffer area. After the encoding process is executed, the output buffer (dequeueOutputBuffer) method of the media encoder (MediaCodec) is called to obtain the encoded data.
Whether the data is legal is judged according to the buffer information (BufferInfo) of the encoded data. If legal, the write sample data (writeSampleData) method of the media muxer (MediaMuxer) is called and the encoded video data is written into the video track; otherwise the frame data is discarded.
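For illustration only, the following is a minimal Java sketch of feeding one byte[] frame to the video encoder and draining the encoded output into the video track; it assumes the video track has already been added to the muxer and the muxer has been started.

    import android.media.MediaCodec;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;

    public class VideoEncodeLoop {

        private final MediaCodec videoEncoder;
        private final MediaMuxer muxer;
        private final int videoTrackIndex;
        private final MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

        public VideoEncodeLoop(MediaCodec videoEncoder, MediaMuxer muxer, int videoTrackIndex) {
            this.videoEncoder = videoEncoder;
            this.muxer = muxer;
            this.videoTrackIndex = videoTrackIndex;
        }

        public void encodeFrame(byte[] frame, long presentationTimeUs) {
            // Apply for an input buffer and copy the converted frame data into it.
            int inIndex = videoEncoder.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                ByteBuffer input = videoEncoder.getInputBuffer(inIndex);
                input.clear();
                input.put(frame);
                videoEncoder.queueInputBuffer(inIndex, 0, frame.length, presentationTimeUs, 0);
            }

            // Drain encoded data and write valid samples into the video track.
            int outIndex = videoEncoder.dequeueOutputBuffer(bufferInfo, 10_000);
            while (outIndex >= 0) {
                ByteBuffer output = videoEncoder.getOutputBuffer(outIndex);
                boolean isConfigData = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
                if (bufferInfo.size > 0 && !isConfigData) {
                    muxer.writeSampleData(videoTrackIndex, output, bufferInfo);  // legal data: write it
                }                                                                 // otherwise: discard
                videoEncoder.releaseOutputBuffer(outIndex, false);
                outIndex = videoEncoder.dequeueOutputBuffer(bufferInfo, 0);
            }
        }
    }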
Similarly, audio encoding can use the same encoding process as video encoding. However, the audio data source differs from the video source: the audio data may come from external sound received by the television MIC or from the sound played by the television (the sound of the specified video content being played). Therefore, during audio encoding, the audio encoder needs to obtain samples cyclically to get the original audio data, and the encoded audio data is written into an audio track by calling the write sample data (writeSampleData) method of the media muxer (MediaMuxer).
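For illustration only, the following is a minimal Java sketch of the cyclic audio sampling path using Android's AudioRecord, feeding PCM data to the audio encoder; mixing in the sound of the specified video content and draining the encoder are omitted, and the buffer sizes are assumptions.

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaCodec;
    import android.media.MediaRecorder;
    import java.nio.ByteBuffer;

    public class AudioCaptureLoop {

        private static final int SAMPLE_RATE = 44100;
        private volatile boolean recording = true;

        public void stop() {
            recording = false;
        }

        // Requires the RECORD_AUDIO permission. Cyclically reads PCM samples from the
        // MIC and queues them into the audio encoder's input buffers.
        public void capture(MediaCodec audioEncoder) {
            int minBuffer = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBuffer);
            byte[] pcm = new byte[minBuffer];
            long startNs = System.nanoTime();
            recorder.startRecording();
            while (recording) {
                int read = recorder.read(pcm, 0, pcm.length);      // cyclically obtain samples
                if (read <= 0) {
                    continue;
                }
                int inIndex = audioEncoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer input = audioEncoder.getInputBuffer(inIndex);
                    input.clear();
                    input.put(pcm, 0, read);
                    long ptsUs = (System.nanoTime() - startNs) / 1_000;
                    audioEncoder.queueInputBuffer(inIndex, 0, read, ptsUs, 0);
                }
                // Draining the encoder and writing into the muxer's audio track follows
                // the same pattern as the video loop and is omitted here.
            }
            recorder.stop();
            recorder.release();
        }
    }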
Finally, after the video encoding and the audio encoding are respectively completed, the obtained video track and audio track are muxed together to obtain the video file. Specifically, the media muxer (MediaMuxer) is called, the video track and the audio track obtained above are taken as the media muxer objects, and the video track and the audio track are muxed into the specified video file.
Fig. 28 is a diagram illustrating a recording completion interface according to the present embodiment. Referring to fig. 28, the display device encodes while video recording is in progress. When the user clicks the control button (the confirmation key) on the recording interface, or when the video recording time is up, the recording flag is reset, the encoding loop is exited, the video recording and file saving are completed, and the display device switches the recording interface to the recording completion interface.
On the recording completion interface, the specified video file that has just been recorded is played in a loop. The specified video file is uploaded to the server; the server generates a corresponding link address for the specified video file, generates two-dimensional code (QR code) information according to the link address, and sends it to the display device, which displays the two-dimensional code information on the recording completion interface. After the user's mobile phone scans the code, the two-dimensional code information is used to download the video to the phone or to synchronize the video to other devices.
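For illustration only, the following is a minimal Java sketch of generating a two-dimensional code from the link address on the server side; the present embodiment does not name a library, so the ZXing QRCodeWriter is used here purely as an example, and the link URL is hypothetical.

    import com.google.zxing.BarcodeFormat;
    import com.google.zxing.WriterException;
    import com.google.zxing.common.BitMatrix;
    import com.google.zxing.qrcode.QRCodeWriter;

    public class ShareQrCode {

        // Encodes the video link into a square bit matrix that the display device can render.
        public BitMatrix encodeLink(String videoLinkUrl, int sizePx) throws WriterException {
            return new QRCodeWriter().encode(videoLinkUrl, BarcodeFormat.QR_CODE, sizePx, sizePx);
        }

        public static void main(String[] args) throws WriterException {
            // hypothetical link address returned by the server for the uploaded file
            BitMatrix matrix = new ShareQrCode()
                    .encodeLink("https://example.com/videos/12345", 400);
            System.out.println("QR matrix size: " + matrix.getWidth() + "x" + matrix.getHeight());
        }
    }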
The recording completion interface may also display interfaces such as "share to friend circle", "save to album", "re-record" and "sing the whole song". The "share to friend circle" interface provides the user with a function of sharing the specified video file to a friend circle; the "save to album" interface provides the user with a function of saving the file locally; the "re-record" interface is used for returning to the application main interface to record again; and the "sing the whole song" interface is used for jumping to a karaoke (K song) interface so that the user can enter the karaoke function.
The embodiment of the invention provides a video recording method that comprises a video generation process and a video synthesis process. In both processes, when a video is recorded, a sticker and filter effect based on a face recognition algorithm is added on top of the original camera picture, and the lyrics are displayed dynamically in the picture. Thus, when the user records a video, in addition to what the camera captures in its shooting area, the video materials corresponding to the selected video can be added synchronously during recording, meeting the personalized requirements of the user; and the picture shot by the camera in real time, the sticker and filter effects and the dynamic lyrics are synthesized synchronously into a video file, so that the final video presents an MV effect.
Another flowchart of a video recording method according to the present embodiment is exemplarily shown in fig. 29. Referring to fig. 29, the video recording method provided by the embodiment of the present invention is applied to a server, and a display device is in communication connection with the server. In order to ensure the local storage space of the display device, video materials and contents required for recording videos are usually stored in a server, and when the display device performs video recording, the display device sends a request to the server to obtain the video materials and contents of corresponding videos, and then performs video recording. Therefore, the video recording method provided by the embodiment of the invention comprises the following steps:
and S21, receiving a video online preview request sent by the display equipment, wherein the video online preview request is used for acquiring interface information, and the interface information comprises at least one of singer name, video title, video address, lyric, video thumbnail file and video material.
And S22, responding to the video online preview request, and sending interface information to the display equipment, wherein the interface information is used for showing a selection interface of the display equipment.
S23, receiving a specified video content obtaining request which is sent by a display device and carries a video address corresponding to a specified video, wherein the specified video is a video selected by the display device in a selection interface, and the specified video content obtaining request is used for obtaining specified video content corresponding to the specified video.
And S24, responding to the appointed video content acquisition request, and sending the appointed video content corresponding to the video address to the display equipment, wherein the appointed video content is displayed on a recording interface when the display equipment performs video recording.
In the process of recording the video, the server is used for issuing corresponding information required for recording the video according to the request of the display device, and the specific implementation process may refer to the content provided in the foregoing embodiment, which is not described herein again.
A flow chart of a method of receiving a video online preview request according to the present embodiment is illustrated in fig. 30. Referring to fig. 30, in addition, the process executed by the server after receiving the video online preview request sent by the display device includes:
s211, receiving a recommended video obtaining request which is sent by a display device and generated when a video recording interface is triggered, wherein the recommended video obtaining request is used for obtaining recommended video list information, and the recommended video list information comprises at least one of a singer name, a video title, a video address, a lyric address, a video thumbnail address and a video material corresponding to each video.
And S212, sending the recommended video list information to the display equipment, wherein the singer name, the video title and the video material in the recommended video list information are used for the display equipment to display on a selection interface, and the video address is used for acquiring video content.
S213, receiving a video online preview request which is sent by the display device and carries a lyric address and a video thumbnail address, wherein the video online preview request is used for acquiring the lyrics corresponding to the lyric address and the video thumbnail file corresponding to the video thumbnail address.
S214, responding to the video online preview request, sending lyrics and a video thumbnail file to the display device, wherein the lyrics are used for the display device to display on a selection interface or a recording interface, and the video thumbnail file is used for the display device to display on the selection interface.
The server issues the recommended video list information to the display device according to the video online preview request sent by the display device, so that the recommended video list information is called when the display device displays or records a video.
As can be seen from the foregoing technical solutions, in the video recording method provided in the embodiments of the present invention, the sending, by the display device, a video online preview request to the server, and receiving interface information returned by the server includes: at least one of artist name, video title, video address, lyrics, video thumbnail file, and video material. And displaying the singer name, the video title, the lyrics, the video thumbnail file and the video material on a selection interface, and playing the video content corresponding to the video address on line on the selection interface. After the user selects the designated video in the selection interface, the display equipment sends a designated video content acquisition request carrying a video address corresponding to the designated video to the server, and receives the designated video content corresponding to the designated video returned by the server. The display equipment acquires the pictures shot by the camera, responds to the recording starting instruction, displays the pictures shot by the camera, the specified video content, the specified lyrics and the specified video material on a recording interface, and starts to record the video. Therefore, the method provided by the invention can synchronously add the video material corresponding to the video when the video is recorded, and add the lyrics for displaying, so that the user can record richer contents except the original picture shot by the camera, thereby meeting the personalized requirements of the user.
In order to reflect the change of a display interface when a user performs video recording by using a display device, an embodiment of the present invention provides a display device, and when performing video recording, a display in the display device may present different pictures according to an operation of the user based on the display device, so as to provide an operation interface for the user to present richer contents, and improve a visual effect of the user.
Fig. 31 is a still another flowchart illustrating a video recording method according to the present embodiment; fig. 32 is a block diagram illustrating an interaction structure of a display device and a server according to the present embodiment. Referring to fig. 31 and 32, the present application provides a display device including: a camera 2010 configured to capture a picture; a display 280 configured to display a selection interface, a ready-to-record interface, a record interface, or a record-complete interface; a communicator 2030 for performing data communication with a server; a processor 2020 in communication with the camera 2010 and the display 280, respectively, through a communicator 2030, the processor 2020 configured to perform the steps of:
and S31, generating a selection interface including a ready-to-record interface in response to a video online preview request generated when a user triggers the video recording interface, the selection interface including a recommended video profile list, video thumbnail files, lyrics, and video material, the recommended video profile list being associated with at least one of the video thumbnail files and the video material.
The interface information also comprises a video address, and the selection interface comprises a video list interface, a video preview interface and a recording preparation interface; the video list interface displays a recommended video introduction list, a plurality of recommended videos are listed in the recommended video introduction list and used for a user to select a designated video, the video preview interface is used for displaying preview contents of the designated video issued by the server based on a video address of the designated video selected by the user, and the recording preparation interface is used for switching the selection interface into a recording preparation interface.
After the display device generates a selection interface comprising a recording preparation interface and interface information, a user can select a reference video for video recording based on the selection interface, at the moment, the user selects any reference video in the video list interface according to personal preference, and a selection information instruction is generated every time one reference video is clicked. And the display equipment responds to a selection information instruction triggered by a user on the video list interface, and plays a specified video in the video preview interface based on the video address, wherein the selection information instruction is used for selecting the specified video in the video list interface.
S32, responding to a specified video content acquisition request generated when a user triggers a recording preparation interface on the selection interface, and generating a recording preparation interface; the interface for preparing recording displays video data corresponding to the specified video, a picture shot by the camera and an interface for executing a recording starting instruction, the specified video content obtaining request is used for obtaining the video data corresponding to the specified video from the server, and the selection interface is switched to the interface for preparing recording.
The video data here is the same as the specified video content mentioned in the foregoing embodiment. After the user operates the display device to enter the recording preparation interface, video recording can be carried out, i.e. the user then clicks the start recording button displayed in the recording preparation interface. Specifically, the display device executes the following steps:
s33, responding to a recording starting instruction generated when a user triggers a recording starting button on a recording preparation interface, generating a recording interface, wherein the recording starting instruction is used for calling pictures shot by a camera and switching the recording preparation interface into the recording interface; the start recording button refers to an interface for executing a start recording instruction.
S34, responding to a recording ending instruction triggered by the user on the recording interface, generating a recording completion interface, wherein the recording completion interface displays the video file recorded by the user, and the video file is obtained by video recording according to the pictures shot by the camera, the specified lyrics and the specified video material; the recording interface displays at least one of video data corresponding to the specified video, pictures shot by the camera and recommended content related to the specified video, wherein the recommended content comprises video materials and lyrics.
When a user operates the display device to record videos, in the process from the start of a video recording application program to the end of the video recording, the interfaces switched in sequence comprise a preview interface, a prompt interface, a selection interface displaying an original thumbnail, a selection interface displaying a ready-to-record interface and interface information, a selection interface playing specified videos on line, a ready-to-record interface, a record finishing interface and the like. The generation of each interface and the corresponding method steps for implementing video recording may refer to the video recording methods shown in fig. 8 to fig. 29, and are not described herein again.
As can be seen from the foregoing technical solutions, the display device provided in the embodiments of the present invention includes a camera for shooting a picture, a display for displaying at least one of the shot picture, a selection interface, a preparation recording interface, a recording interface, and a recording completion interface, a communicator for performing data communication with a server, and a processor for communicating with the camera and the display through the communicator; the processor generates a selection interface comprising a ready-to-record interface in response to a video online preview request generated when a user triggers the video recording interface, the selection interface comprising a recommended video profile list, video thumbnail files and video materials, the recommended video profile list being associated with at least one of the video thumbnail files, lyrics and video materials; responding to a specified video content acquisition request generated when a user triggers a recording preparation interface on a selection interface, and generating a recording preparation interface; the interface for preparing recording displays video data corresponding to the specified video, a picture shot by the camera and an interface for executing a recording starting instruction, the specified video content obtaining request is used for obtaining the video data corresponding to the specified video from the server, and the selection interface is switched to the interface for preparing recording. The display equipment provided by the embodiment of the invention can synchronously add the video material corresponding to the video and add the lyrics for displaying when recording the video, so that a user can record richer contents except the original picture shot by the camera, and the display interface is switched in real time along with the operation of the user so as to present the rich contents to the user, thereby meeting the personalized requirements of the user.
Fig. 32 is a block diagram illustrating an interaction structure of a display device and a server according to the present embodiment. Referring to fig. 32, an embodiment of the present invention provides a display device, including: a camera 2010 configured to capture a picture; a display 280 configured to display a selection interface or a recording interface; a processor 2020 in communication with the camera and the display, respectively, the processor 2020 configured to send a video online preview request to a server, the video online preview request for requesting the server to issue interface information, the interface information comprising at least one of a singer name, a video title, a video address, lyrics, a video thumbnail file, and video material; receiving interface information returned by the server according to the video online preview request, and displaying the singer name, the video title, the lyrics, the video thumbnail file and the video material on a selection interface; receiving a selection information instruction triggered by a user based on the selection interface, wherein the selection information instruction is used for selecting a specified video in the selection interface; sending an appointed video content acquisition request generated when a ready-to-record interface is triggered to a server, wherein the appointed video content acquisition request carries a video address corresponding to an appointed video, and is used for enabling the server to issue the appointed video content corresponding to the appointed video to generate a ready-to-record interface, and presenting a picture shot by a camera and the appointed video content returned by the server according to the appointed video content acquisition request on the ready-to-record interface; receiving a recording starting instruction triggered by a user, displaying a picture shot by the camera, designated video content, designated lyrics and designated video materials on a recording interface, and starting video recording, wherein the designated lyrics refer to the lyrics corresponding to the designated video, and the designated video materials refer to the video materials corresponding to the designated video.
Further, the processor 2020 is further configured to: sending a recommended video acquisition request generated when a video recording interface is triggered to a server, wherein the recommended video acquisition request is used for requesting the server to issue recommended video list information, and the recommended video list information comprises at least one of a singer name, a video title, a video address, a lyric address, a video thumbnail address and a video material corresponding to each video; receiving recommended video list information returned by the server according to the recommended video acquisition request; generating a selection interface displaying the singer name and the video title; when a user selects a designated video based on the selection interface, sending a video online preview request carrying the lyric address and the video thumbnail address to a server, wherein the video online preview request is used for requesting the server to issue lyrics corresponding to the lyric address and video thumbnail files corresponding to the video thumbnail address; and receiving the lyrics and the video thumbnail file returned by the server according to the video online preview request.
Further, the processor 2020 is further configured to: receiving an application starting instruction generated when a video recording application program is triggered; responding to the application starting instruction, starting a camera, and generating a preview interface, wherein the preview interface is used for displaying the video recording interface and the picture shot by the camera.
Further, the processor 2020 is further configured to: receiving a starting instruction generated when a video recording interface is triggered; responding to the starting instruction, and detecting whether the network state is connected; if the network states are communicated, generating a recommended video acquisition request; and sending the recommended video acquisition request to a server.
Further, the processor 2020 is further configured to: and if the network states are not connected, generating a prompt interface, wherein the prompt interface is used for displaying prompt information and selecting buttons.
Further, the processor 2020 is further configured to: and when the lyrics and the video thumbnail file returned by the server are not received, generating an original thumbnail and displaying the original thumbnail on a selection interface.
Further, the processor 2020 is further configured to: the selection interface comprises a video list interface and a video preview interface; displaying the singer name, the video title and the video thumbnail file corresponding to each video to be subjected to video recording on a video list interface; when any video in the video list interface is selected as the designated video, sending a video address corresponding to the designated video to a player, wherein the player is used for playing designated video content on line according to the video address, the designated video content is displayed on a video preview interface, and lyrics and video materials are displayed on the video preview interface; and generating a ready-to-record interface below the video preview interface.
Further, the processor 2020 is further configured to: obtaining a preview picture based on the picture shot by the camera; displaying the specified video content at the upper right of the preview picture; generating a start recording button with a shadow area, and displaying the start recording button with the shadow area at the bottom of the preview picture; and generating a ready-to-record interface by taking the preview picture as a bottom layer, the appointed video content as a middle layer and the start-to-record button with the shadow area as a top layer.
Further, the processor 2020 is further configured to: receiving a recording starting instruction generated when a recording starting button is triggered; responding to the recording starting instruction, and acquiring specified lyrics and specified video materials corresponding to the specified video; generating a preview picture comprising a picture shot by the camera and a specified video material; starting to play the designated video content displayed on the upper right of the preview picture; switching the start recording button with the shadow area to a control button with the shadow area; acquiring the total duration of the specified video, and displaying the total duration on a control button in a countdown mode; displaying the specified lyrics in a shaded area of the control button, wherein the shaded area is displayed at the bottom of the preview picture; and taking the preview picture as a bottom layer, the specified video content in a playing state as a first middle layer, a control button with a shaded area in a countdown state as a second middle layer and the specified lyrics as a top layer to generate a recording interface.
Further, the processor 2020 is further configured to: acquiring a picture shot by a camera in real time, and displaying the picture in a full screen; and based on the target position in the picture shot by the camera, overlaying the specified video material to the picture shot by the camera to obtain a preview picture.
Further, the processor 2020 is further configured to: acquiring the lyric content and the occurrence time of each sentence of lyrics in the specified lyrics and the display time length of each character in each sentence of lyrics; detecting the playing progress of the appointed video content after the appointed video content starts playing; when the appearance time of each lyric is consistent with the time corresponding to the playing progress, displaying the target lyric content corresponding to the appearance time; and redrawing the frame of the characters according to the display time length of each character in the target lyric content, and realizing the dynamic display of the appointed lyric progress in a shadow area.
Further, the processor 2020 is further configured to: receiving a recording ending instruction generated when a control button is triggered, wherein the state of the control button comprises a triggering state when countdown is ended and a triggering state when the countdown is not ended and the control button is clicked; and responding to the recording ending instruction, and synthesizing the specified video material, the specified lyrics and the picture shot by the camera to obtain a specified video file.
Further, the processor 2020 is further configured to: responding to a recording ending instruction, calling a renderer, and rendering and synthesizing the appointed video material, the appointed lyrics and the picture shot by the camera corresponding to each frame of video picture to obtain a synthesized image; calling a view previewer to convert the composite image into byte type data; and calling a media encoder to encode the byte type data to obtain the specified video file.
Referring to fig. 32, an embodiment of the present invention provides a server 300, including: a processor 301, and a memory 302 for storing the processor-executable instructions; wherein the processor 301 is configured to: receiving a video online preview request sent by display equipment, wherein the video online preview request is used for acquiring interface information, and the interface information comprises at least one of a singer name, a video title, a video address, lyrics, a video thumbnail file and a video material; responding to the video online preview request, and sending the interface information to display equipment, wherein the interface information is used for displaying a selection interface of the display equipment; receiving a specified video content acquisition request which is sent by display equipment and carries a video address corresponding to a specified video, wherein the specified video is a video selected by the display equipment in a selection interface, and the specified video content acquisition request is used for acquiring specified video content corresponding to the specified video; and responding to the appointed video content acquisition request, and sending the appointed video content corresponding to the video address to display equipment, wherein the appointed video content is displayed on a recording interface when the display equipment performs video recording.
Further, the processor 301 is further configured to: receiving a recommended video acquisition request which is sent by the display equipment and generated when a video recording interface is triggered, wherein the recommended video acquisition request is used for acquiring recommended video list information, and the recommended video list information comprises at least one of a singer name, a video title, a video address, a lyric address, a video thumbnail address and a video material corresponding to each video; sending the recommended video list information to display equipment, wherein the singer name, the video title and the video material in the recommended video list information are used for the display equipment to display on a selection interface, and the video address is used for acquiring video content; receiving a video online preview request which is sent by a display device and carries the lyric address and a video thumbnail address, wherein the video online preview request is used for acquiring lyrics corresponding to the lyric address and video thumbnail files corresponding to the video thumbnail address; and responding to the video online preview request, and sending the lyrics and the video thumbnail file to a display device, wherein the lyrics are used for the display device to display in a selection interface or a recording interface, and the video thumbnail file is used for the display device to display in the selection interface.
Referring to fig. 32, the present application provides a display apparatus including: a display 280 configured to display a selection interface; a processor 2020 in communication with the display, the processor 2020 configured to: responding to a selection instruction generated when a user triggers a designated video in a selection interface, and acquiring a video thumbnail file of the designated video; calling a palette class, and extracting a color value of each color in the video thumbnail file; comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file; and changing the background color of the selection interface and displaying the background color based on the main color.
Further, the processor 2020 is further configured to: calling a palette class and extracting the pixel value of the video thumbnail file; clustering the pixel values with the same color characteristics to obtain different types of colors; a color value for each type of color is calculated, the color value comprising a number of pixel values.
Further, the processor 2020 is further configured to: calculating the ratio of each color according to the color value of each color in the video thumbnail file; comparing the ratio of each color to determine the color corresponding to the largest ratio; and taking the color corresponding to the maximum ratio value as the main color of the video thumbnail file.
Further, the processor 2020 is further configured to: extracting a red color value, a green color value and a blue color value of each color value based on the color value of each color in the video thumbnail file; calculating a variance between the red, green and blue color values for each color; and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
Further, the processor 2020 is further configured to: acquiring a background picture of the selection interface; and setting the color of the background image as a main color and displaying the main color.
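For illustration only, the following is a minimal Java sketch of the background color change using the androidx Palette class; choosing the swatch with the largest population corresponds to comparing the color proportions described above, while the variance-based selection is not shown, and the view passed in is an assumption.

    import android.graphics.Bitmap;
    import android.view.View;
    import androidx.palette.graphics.Palette;

    public class ThumbnailBackground {

        // Extracts the colors of the video thumbnail with the Palette class, picks the
        // color with the largest pixel share as the main color, and applies it as the
        // background color of the selection interface.
        public void applyMainColor(Bitmap videoThumbnail, View selectionInterfaceBackground) {
            Palette.from(videoThumbnail).generate(palette -> {
                if (palette == null) {
                    return;
                }
                Palette.Swatch main = null;
                // compare the proportion (population) of each clustered color
                for (Palette.Swatch swatch : palette.getSwatches()) {
                    if (main == null || swatch.getPopulation() > main.getPopulation()) {
                        main = swatch;
                    }
                }
                if (main != null) {
                    // change the background of the selection interface to the main color
                    selectionInterfaceBackground.setBackgroundColor(main.getRgb());
                }
            });
        }
    }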
Referring to fig. 31, the present application provides a video recording method applied to a display device, the method including: generating a selection interface comprising a ready-to-record interface in response to a video online preview request generated when a user triggers a video recording interface, the selection interface comprising a recommended video profile list, video thumbnail files, and video materials, the recommended video profile list being associated with at least one of the video thumbnail files and the video materials; responding to a specified video content acquisition request generated when a user triggers the interface to be recorded on the selection interface, and generating an interface to be recorded; the interface for preparing recording displays video data corresponding to a specified video, a picture shot by a camera and an interface for executing a recording starting instruction, and the specified video content acquisition request is used for acquiring the video data corresponding to the specified video from a server and switching the selection interface into the interface for preparing recording.
Further, still include: responding to a recording starting instruction generated when a user triggers the recording starting button on the recording preparation interface, generating a recording interface, wherein the recording starting instruction is used for calling pictures shot by a camera and switching the recording preparation interface into the recording interface; the recording starting button is an interface for executing a recording starting instruction; the recording interface displays at least one of video data corresponding to a specified video, a picture shot by a camera and recommended content associated with the specified video, wherein the recommended content comprises video materials and lyrics.
Further, the selection interface comprises a video list interface, a video preview interface and a recording preparation interface; the video list interface displays a recommended video introduction list used for a user to select a designated video, the video preview interface is used for displaying the preview content of the designated video issued by the server based on the video address of the designated video selected by the user, and the recording preparation interface is used for switching the selection interface into a recording preparation interface.
Further, after generating the selection interface including the interface for preparing recording, the method further includes: responding to a selection information instruction triggered by a user on the video list interface, and playing a specified video in the video preview interface based on the video address, wherein the selection information instruction is used for selecting the specified video in the video list interface.
Further, before responding to the video online preview request generated when the user triggers the video recording interface, the method further comprises the following steps: and responding to an application starting instruction generated when a user triggers a video recording application program, starting the camera, and generating a preview interface, wherein the preview interface is used for displaying pictures shot by the video recording interface and the camera.
Further, before generating the selection interface including the interface for preparing recording and the interface information, the method further includes: and generating a selection interface displaying the original thumbnail in response to a video online preview request generated when a user triggers the video recording interface.
Further, still include: responding to a starting instruction generated by a user triggering a video recording interface, and detecting whether the network state is communicated; and when the network state is not connected, generating a prompt interface, wherein the prompt interface is used for displaying prompt information and selecting a button.
Further, the generating the recording preparation interface in response to the specified video content acquisition request generated when the user triggers the recording preparation interface comprises: in response to the specified video content acquisition request, calling the picture shot by the camera to obtain a preview picture; displaying the specified video content on one side of the preview picture in a floating manner; and generating the recording preparation interface with the preview picture as a bottom layer and the specified video content as an upper layer.
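The bottom-layer/upper-layer arrangement described above maps naturally onto a stacked view group. The sketch below assumes an Android FrameLayout; the class, method and view names are illustrative and not taken from the disclosure.

import android.view.Gravity;
import android.view.View;
import android.widget.FrameLayout;

public final class ReadyToRecordLayout {
    // previewView: full-screen camera preview (bottom layer);
    // videoView: the specified video content floated on one side (upper layer).
    public static void build(FrameLayout root, View previewView, View videoView) {
        FrameLayout.LayoutParams full = new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT);
        root.addView(previewView, full); // bottom layer

        FrameLayout.LayoutParams side = new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.WRAP_CONTENT, FrameLayout.LayoutParams.WRAP_CONTENT,
                Gravity.END | Gravity.CENTER_VERTICAL); // floating on one side of the preview
        root.addView(videoView, side); // added last, so it draws above the preview
    }
}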
Further, the generating a recording interface in response to the recording start instruction generated when the user triggers the recording start button comprises: in response to the recording start instruction, acquiring specified lyrics and a specified video material corresponding to the specified video; generating a preview picture comprising the picture shot by the camera and the specified video material; starting to play the specified video content displayed on one side of the preview picture; switching the recording start button to a control button; acquiring the total duration of the specified video and displaying the total duration on the control button in a countdown manner; displaying the specified lyrics at the bottom of the preview picture; and generating the recording interface with the preview picture as a bottom layer, the specified video content in a playing state as a first middle layer, the control button presenting the countdown as a second middle layer and the specified lyrics as a top layer.
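The countdown shown on the control button could be driven by a timer such as Android's CountDownTimer, as in the hedged sketch below; RecordingCountdown, startRecordingCountdown-style naming and controlButton are hypothetical and only illustrate the step, not the patented implementation.

import android.os.CountDownTimer;
import android.widget.Button;

public final class RecordingCountdown {
    // Displays the total duration of the specified video on the control button as a per-second countdown.
    public static CountDownTimer start(final Button controlButton, long totalDurationMs) {
        CountDownTimer timer = new CountDownTimer(totalDurationMs, 1000) {
            @Override
            public void onTick(long millisUntilFinished) {
                controlButton.setText((millisUntilFinished / 1000) + "s");
            }

            @Override
            public void onFinish() {
                controlButton.setText("0s"); // the recording has reached the total duration
            }
        };
        timer.start();
        return timer;
    }
}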
Further, the generating a preview picture comprising the picture shot by the camera and the specified video material comprises: acquiring the picture shot by the camera in real time and displaying it in full screen; and overlaying the specified video material onto the picture shot by the camera based on a target position in the picture shot by the camera, to obtain the preview picture.
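One plausible way to overlay the specified video material onto the camera picture at a target position is off-screen bitmap composition, sketched below under the assumption that both the camera frame and the material are available as Bitmaps; PreviewComposer and its parameters are illustrative names only.

import android.graphics.Bitmap;
import android.graphics.Canvas;

public final class PreviewComposer {
    // Draws the specified video material on top of the camera frame at the given target position.
    public static Bitmap compose(Bitmap cameraFrame, Bitmap material, float targetX, float targetY) {
        Bitmap preview = Bitmap.createBitmap(
                cameraFrame.getWidth(), cameraFrame.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(preview);
        canvas.drawBitmap(cameraFrame, 0f, 0f, null);         // full-screen camera picture
        canvas.drawBitmap(material, targetX, targetY, null);  // overlay at the target position
        return preview;
    }
}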
In a specific implementation, the present invention further provides a computer storage medium, wherein the computer storage medium may store a program which, when executed, may perform some or all of the steps of the embodiments of the method for changing the interface background color provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied, essentially or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to execute the method described in the embodiments or in some parts of the embodiments.
For identical or similar parts among the embodiments in this specification, reference may be made to one another. In particular, the embodiment of the display device is described briefly because it is substantially similar to the embodiment of the method for changing the background color of the interface; for relevant details, reference may be made to the description in the method embodiment.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (10)

1. A method for changing the color of an interface background, comprising the steps of:
responding to a selection instruction generated when a user triggers a designated video in a selection interface, and acquiring a video thumbnail file of the designated video;
calling a palette class, and extracting a color value of each color in the video thumbnail file;
comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file;
and changing the background color of the selection interface and displaying the background color based on the main color.
2. The method of claim 1, wherein the calling a palette class and extracting a color value of each color in the video thumbnail file comprises:
calling the palette class and extracting pixel values of the video thumbnail file;
clustering the pixel values with the same color characteristics to obtain different types of colors;
and calculating a color value of each type of color, the color value comprising the number of pixel values.
3. The method of claim 1, wherein the comparing the color proportion of each color based on the color value of each color in the video thumbnail file and determining the main color of the video thumbnail file comprises:
calculating a proportion of each color according to the color value of each color in the video thumbnail file;
comparing the proportions of the colors to determine the color corresponding to the largest proportion;
and taking the color corresponding to the largest proportion as the main color of the video thumbnail file.
4. The method of claim 1, wherein the comparing the color proportion of each color based on the color value of each color in the video thumbnail file and determining the main color of the video thumbnail file comprises:
extracting a red color value, a green color value and a blue color value from the color value of each color in the video thumbnail file;
calculating a variance between the red, green and blue color values for each color;
and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
5. The method of claim 1, wherein the changing the background color of the selection interface and displaying the background color based on the main color comprises:
acquiring a background picture of the selection interface;
and setting the color of the background picture to the main color and displaying the main color.
6. A display device, comprising:
a display configured to display a selection interface;
a processor in communication with the display, the processor configured to: responding to a selection instruction generated when a user triggers a designated video in a selection interface, and acquiring a video thumbnail file of the designated video;
calling a palette class, and extracting a color value of each color in the video thumbnail file;
comparing the color proportion of each color based on the color value of each color in the video thumbnail file, and determining the main color of the video thumbnail file;
and changing the background color of the selection interface and displaying the background color based on the main color.
7. The display device of claim 6, wherein the processor is further configured to:
calling the palette class and extracting pixel values of the video thumbnail file;
clustering the pixel values with the same color characteristics to obtain different types of colors;
and calculating a color value of each type of color, the color value comprising the number of pixel values.
8. The display device of claim 6, wherein the processor is further configured to:
calculating a proportion of each color according to the color value of each color in the video thumbnail file;
comparing the proportions of the colors to determine the color corresponding to the largest proportion;
and taking the color corresponding to the largest proportion as the main color of the video thumbnail file.
9. The display device of claim 6, wherein the processor is further configured to:
extracting a red color value, a green color value and a blue color value from the color value of each color in the video thumbnail file;
calculating a variance between the red, green and blue color values for each color;
and comparing the variances corresponding to each color, and taking the color corresponding to the maximum variance as the main color of the video thumbnail file.
10. The display device of claim 6, wherein the processor is further configured to:
acquiring a background picture of the selection interface;
and setting the color of the background picture to the main color and displaying the main color.
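The color-extraction logic recited in claims 1-5 (and mirrored in device claims 6-10) can be illustrated with a short sketch, under the assumption that the claimed "palette class" refers to the AndroidX Palette library and that the video thumbnail file has been decoded into a Bitmap; class and method names such as ThumbnailColorHelper are illustrative only and do not appear in the disclosure.

import android.graphics.Bitmap;
import android.graphics.Color;
import android.view.View;
import androidx.palette.graphics.Palette;
import java.util.List;

public final class ThumbnailColorHelper {

    // Cluster the thumbnail's pixels into swatches via the palette class, then take the swatch with the
    // largest pixel population as the main color (populations share one denominator, so comparing them
    // is equivalent to comparing proportions). Palette.generate() should run off the main thread.
    public static int mainColorByPopulation(Bitmap thumbnail) {
        Palette palette = Palette.from(thumbnail).generate();
        List<Palette.Swatch> swatches = palette.getSwatches();
        Palette.Swatch best = null;
        for (Palette.Swatch swatch : swatches) {
            if (best == null || swatch.getPopulation() > best.getPopulation()) {
                best = swatch;
            }
        }
        return best != null ? best.getRgb() : Color.BLACK; // fallback if no swatch was found
    }

    // Alternative selection: take the swatch whose red, green and blue components have the largest variance.
    public static int mainColorByVariance(Bitmap thumbnail) {
        Palette palette = Palette.from(thumbnail).generate();
        Palette.Swatch best = null;
        double bestVariance = -1;
        for (Palette.Swatch swatch : palette.getSwatches()) {
            int rgb = swatch.getRgb();
            double r = Color.red(rgb), g = Color.green(rgb), b = Color.blue(rgb);
            double mean = (r + g + b) / 3.0;
            double variance = ((r - mean) * (r - mean)
                    + (g - mean) * (g - mean)
                    + (b - mean) * (b - mean)) / 3.0;
            if (variance > bestVariance) {
                bestVariance = variance;
                best = swatch;
            }
        }
        return best != null ? best.getRgb() : Color.BLACK;
    }

    // Set the selection interface's background to the main color.
    public static void applyBackground(View selectionInterfaceBackground, int mainColor) {
        selectionInterfaceBackground.setBackgroundColor(mainColor);
    }
}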
CN202010069609.XA 2020-01-21 2020-01-21 Method for changing interface background color and display equipment Pending CN111291219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010069609.XA CN111291219A (en) 2020-01-21 2020-01-21 Method for changing interface background color and display equipment

Publications (1)

Publication Number Publication Date
CN111291219A true CN111291219A (en) 2020-06-16

Family

ID=71030697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010069609.XA Pending CN111291219A (en) 2020-01-21 2020-01-21 Method for changing interface background color and display equipment

Country Status (1)

Country Link
CN (1) CN111291219A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353212A (en) * 2016-02-10 2018-07-31 谷歌有限责任公司 The dynamic color of user interface components for video player determines
CN109783178A (en) * 2019-01-24 2019-05-21 北京字节跳动网络技术有限公司 A kind of color adjustment method of interface assembly, device, equipment and medium
CN110602394A (en) * 2019-09-06 2019-12-20 北京达佳互联信息技术有限公司 Video shooting method and device and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351203A (en) * 2020-10-26 2021-02-09 北京达佳互联信息技术有限公司 Video shooting method and device, electronic equipment and storage medium
CN112668546A (en) * 2021-01-13 2021-04-16 海信视像科技股份有限公司 Video thumbnail display method and display equipment
CN113178228A (en) * 2021-05-25 2021-07-27 郑州中普医疗器械有限公司 Cell analysis method based on nuclear DNA analysis, computer device, and storage medium
CN113178228B (en) * 2021-05-25 2023-02-10 郑州中普医疗器械有限公司 Cell analysis method based on nuclear DNA analysis, computer device, and storage medium

Similar Documents

Publication Publication Date Title
CN111163274B (en) Video recording method and display equipment
CN112073797B (en) Volume adjusting method and display device
CN112533037B (en) Method for generating Lian-Mai chorus works and display equipment
CN112073798B (en) Data transmission method and equipment
CN112399212A (en) Display device, file sharing method and server
CN111291219A (en) Method for changing interface background color and display equipment
CN112073788A (en) Video data processing method and device and display equipment
CN112788422A (en) Display device
CN112073662A (en) Display device
CN112073770B (en) Display device and video communication data processing method
CN112399263A (en) Interaction method, display device and mobile terminal
CN113497884B (en) Dual-system camera switching control method and display equipment
CN112533023B (en) Method for generating Lian-Mai chorus works and display equipment
CN112533030B (en) Display method, display equipment and server of singing interface
CN113448529B (en) Display apparatus and volume adjustment method
CN112073666B (en) Power supply control method of display equipment and display equipment
CN112399071B (en) Control method and device for camera motor and display equipment
CN113938633A (en) Video call processing method and display device
CN112399235A (en) Method for enhancing photographing effect of camera of smart television and display device
CN112839254A (en) Display apparatus and content display method
CN113301404A (en) Display apparatus and control method
CN112073826B (en) Method for displaying state of recorded video works, server and terminal equipment
CN113727163B (en) Display device
CN112399223B (en) Method for improving moire fringe phenomenon and display device
CN113453079B (en) Control method for returning double-system-size double-screen application and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination