WO2022001635A1 - Display device and display method
- Publication number: WO2022001635A1 (application PCT/CN2021/099792)
- Authority: WIPO (PCT)
- Prior art keywords: display, image, depth image, mixed, user
Classifications
- H: Electricity
- H04: Electric communication technique
- H04N: Pictorial communication, e.g. television
- H04N13/00: Stereoscopic video systems; multi-view video systems; details thereof
- H04N13/20: Image signal generators
- H04N13/204: Image signal generators using stereoscopic image cameras
- H04N13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
- H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
- H04N21/47: End-user applications
- H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788: Supplemental services for communicating with other users, e.g. chatting
Definitions
- the present application relates to the technical field of display devices, and in particular, to a display device and a display method.
- the application provides a display device, the display device includes:
- a camera for collecting a first depth image
- a display for displaying a user interface, and displaying a selector in the user interface for indicating that an item is selected in the user interface;
- a controller connected to the display and the camera, respectively, wherein the controller is configured to:
- FIG. 1 exemplarily shows a schematic diagram of an operation scene between a display device and a control apparatus according to some embodiments
- FIG. 2 exemplarily shows a hardware configuration block diagram of a display device 200 according to some embodiments
- FIG. 3 exemplarily shows a hardware configuration block diagram of the control apparatus 100 according to some embodiments
- FIG. 4 exemplarily shows a schematic diagram of software configuration in the display device 200 according to some embodiments
- FIG. 5 exemplarily shows a schematic diagram of displaying the icon control interface of the application in the display device 200 according to some embodiments
- FIG. 6 exemplarily shows a schematic diagram of an AR hybrid call according to some embodiments
- FIG. 7 exemplarily shows a schematic diagram of a hybrid call interaction according to some embodiments.
- FIG. 8 exemplarily shows a schematic diagram of a video call interface according to some embodiments.
- FIG. 9 exemplarily shows a schematic diagram of a hybrid call interface according to some embodiments.
- FIG. 10 exemplarily shows a schematic diagram of a hybrid call interface according to other embodiments.
- FIG. 11 exemplarily shows a schematic flowchart of a video calling method according to some embodiments.
- FIG. 12 is a rear view of a display device in some embodiments of the present application.
- FIG. 13 is a block diagram of a hardware configuration of a control device in some embodiments of the present application.
- FIG. 14 is a block diagram of a hardware configuration of a display device in some embodiments of the present application.
- FIG. 15 is a block diagram of the architecture configuration of the operating system in the display device memory in some embodiments of the present application.
- FIG. 16A is a schematic diagram of a landscape screen state of a display device in some embodiments of the present application.
- FIG. 16B is a schematic diagram of a vertical screen state of a display device in some embodiments of the present application.
- FIG. 17A is a schematic flowchart of a rotation control method in some embodiments of the present application.
- FIG. 17B is a schematic diagram of a touch rotation process in some embodiments of the present application.
- FIG. 18A is a schematic flowchart of controlling and displaying a prompt UI interface in some embodiments of the present application.
- FIG. 18B is a schematic diagram of a prompt UI interface in some embodiments of the present application.
- FIG. 19A is a schematic flowchart of determining whether a touch action and a preset rotation action are the same in some embodiments of the present application.
- FIG. 19B is a schematic diagram of touch actions in some embodiments of the present application.
- FIG. 20 is a schematic flowchart of controlling the rotation component to adjust the rotation state of the display in some embodiments of the present application.
- FIG. 21 is a schematic flowchart of controlling the rotation of the rotating component according to the bending angle in some embodiments of the present application.
- FIG. 22 is a schematic structural diagram of a display device in some embodiments of the present application.
- module refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of performing the function associated with that element.
- remote control refers to a component of an electronic device, such as the display device disclosed in this application, that can wirelessly control the electronic device, usually over a short distance.
- the remote control usually uses infrared and/or radio frequency (RF) signals and/or Bluetooth to connect with the electronic device, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors.
- a hand-held touch remote control replaces most of the physical built-in hard keys in a general remote control device with a user interface in a touch screen.
- gesture, as used in this application, refers to a user behavior through which the user expresses an expected thought, action, purpose, and/or result via an action such as a change of hand shape or a hand movement.
- FIG. 1 exemplarily shows a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment.
- a user may operate the display device 200 through the mobile terminal 300 and the control apparatus 100.
- the control device 100 may be a remote controller; communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication methods, and the display device 200 is controlled wirelessly or by other wired methods.
- the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
- the user can control the functions of the display device 200 by inputting corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power on/off key on the remote control.
- mobile terminals, tablet computers, computers, notebook computers, and other smart devices may also be used to control the display device 200 .
- the display device 200 is controlled using an application running on the smart device.
- the app can be configured to provide users with various controls in an intuitive user interface (UI) on the screen associated with the smart device.
- a software application may be installed on both the mobile terminal 300 and the display device 200 to implement connection and communication through a network communication protocol, so as to achieve the purpose of one-to-one control operation and data communication.
- in some embodiments, a control command protocol can be established between the mobile terminal 300 and the display device 200, the remote control keyboard can be synchronized to the mobile terminal 300, and the function of controlling the display device 200 can be realized by controlling the user interface on the mobile terminal 300.
- the audio and video content displayed on the mobile terminal 300 may also be transmitted to the display device 200 to implement a synchronous display function.
- the display device 200 also performs data communication with the server 400 through various communication methods.
- the display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks.
- the server 400 may provide various contents and interactions to the display device 200 .
- the display device 200 interacts with the server 400 by sending and receiving information, for example exchanging electronic program guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library.
- the server 400 may be a cluster or multiple clusters, and may include one or more types of servers. Other network service contents such as video-on-demand and advertising services are provided through the server 400 .
- the display device 200 may be a liquid crystal display, an OLED display, or a projection display device.
- the specific display device type, size and resolution are not limited. Those skilled in the art can understand that the display device 200 can make some changes in performance and configuration as required.
- the display device 200 may additionally provide a smart network TV function with computer support, including but not limited to network TV, smart TV, Internet Protocol TV (IPTV), and the like, in addition to the broadcast receiving TV function.
- FIG. 2 exemplarily shows a block diagram of the hardware configuration of the display device 200 according to the exemplary embodiment.
- the display device 200 includes at least one of a controller 250, a tuner 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.
- the display 275 is a component for receiving image signals output from the first processor and for displaying video content, images, and a menu manipulation interface.
- the display 275 includes a display screen component for presenting pictures, and a driving component for driving image display.
- the video content displayed may be from broadcast television content or various broadcast signals that may be received via wired or wireless communication protocols.
- various image contents received from the network server side via the network communication protocol can be displayed.
- display 275 is used to present a user-manipulated UI interface generated in display device 200 and used to control display device 200 .
- a driving component for driving the display is also included.
- when the display 275 is a projection display, a projection device and a projection screen may also be included.
- communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types.
- the communicator may include at least one of a WiFi chip, a Bluetooth communication protocol chip, a wired Ethernet communication protocol chip or other network communication protocol chips, a near field communication protocol chip, and an infrared receiver.
- the display device 200 may establish the transmission and reception of control signals and data signals with the external control device 100 or a content providing device through the communicator 220.
- the user interface 265 may be used to receive infrared control signals from the control device 100 (eg, an infrared remote control, etc.).
- the detector 230 is a component used by the display device 200 to collect signals from the external environment or to interact with the outside.
- the detector 230 includes a light receiver, a sensor for collecting the intensity of ambient light, so that display parameters and the like can be adaptively changed according to the collected ambient light.
- the detector 230 may also include an image collector, such as a camera, which can be used to collect external environment scenes and user attributes or gestures for interaction with the user, can adaptively change display parameters, and can recognize user gestures to implement functions of interacting with the user.
- detector 230 may also include a temperature sensor or the like, such as by sensing ambient temperature.
- the display device 200 can adaptively adjust the display color temperature of the image; for example, when the ambient temperature is relatively high, the display device 200 can be adjusted to display images with a relatively cool color temperature, and when the ambient temperature is relatively low, the display device 200 can be adjusted to display images with a warmer color temperature, as sketched below.
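- The following minimal Python sketch illustrates such temperature-driven color temperature adaptation; the thresholds and the linear mapping are illustrative assumptions, not values given in this application.

```python
def target_color_temperature_k(ambient_celsius: float) -> int:
    """Map ambient temperature to a display color temperature (Kelvin).

    Hypothetical thresholds: warmer ambient -> cooler (bluer) image,
    cooler ambient -> warmer (redder) image, as described above.
    """
    if ambient_celsius >= 30.0:
        return 7500   # relatively cool color temperature
    if ambient_celsius <= 10.0:
        return 5000   # relatively warm color temperature
    # linear interpolation between the two extremes
    span = (ambient_celsius - 10.0) / 20.0
    return int(5000 + span * (7500 - 5000))

print(target_color_temperature_k(25.0))  # 6875
```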
- the detector 230 may also include a sound collector or the like, such as a microphone, which may be used to receive the user's voice, for example a voice signal of a control instruction for the user to control the display device 200, or to collect ambient sounds for identifying the type of the ambient scene, so that the display device 200 can adapt to the ambient noise.
- the input/output interface 255 is configured to enable data transfer between the controller 250 and other external devices or other controllers 250 . Such as receiving video signal data and audio signal data of external equipment, or command instruction data, etc.
- the external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface, a composite video input interface, a USB input interface, an RGB port, etc. A composite input/output interface may also be formed by a plurality of the above-mentioned interfaces.
- the tuner-demodulator 210 is configured to receive broadcast television signals through wired or wireless reception, perform modulation and demodulation processing such as amplification, frequency mixing, and resonance, and demodulate the audio and video signals from among the multiple wireless or cable broadcast television signals received.
- the audio and video signal may include the TV audio and video signal carried in the frequency of the TV channel selected by the user, and the EPG data signal.
- the frequency selected by the tuner-demodulator 210 is controlled by the controller 250: the controller 250 can send a control signal according to the user's selection, so that the modem responds to the TV signal frequency selected by the user and demodulates the signal carried on that frequency.
- broadcast television signals may be classified into terrestrial broadcast signals, cable broadcast signals, satellite broadcast signals, or Internet broadcast signals, etc. according to different broadcast formats of the television signals. Or according to different modulation types, it can be divided into digital modulation signal, analog modulation signal, etc. Or, it can be divided into digital signals, analog signals, etc. according to different types of signals.
- the controller 250 and the tuner 210 may be located in different separate devices; that is, the tuner 210 may also be located in a device external to the main device where the controller 250 is located, such as an external set-top box.
- the set-top box outputs the modulated and demodulated television audio and video signals of the received broadcast television signals to the main device, and the main device receives the audio and video signals through the first input/output interface.
- the controller 250 controls the operation of the display device and responds to user operations.
- the controller 250 may control the overall operation of the display apparatus 200 .
- the controller 250 may perform an operation related to the object selected by the user command.
- the object may be any of the selectable objects, such as a hyperlink or an icon.
- operations related to the selected object include, for example, displaying the page, document, or image linked to a hyperlink, or executing the operation corresponding to the icon.
- the user command for selecting the UI object may be an input command through various input devices (eg, a mouse, a keyboard, a touchpad, etc.) connected to the display device 200 or a voice command corresponding to a voice spoken by the user.
- the controller 250 includes at least one of a random access memory 251 (Random Access Memory, RAM), a read-only memory 252 (Read-Only Memory, ROM), a video processor 270, an audio processor 280, other processors 253 (for example, a graphics processing unit (Graphics Processing Unit, GPU)), a central processing unit 254 (Central Processing Unit, CPU), a communication interface (Communication Interface), and a communication bus 256 (Bus), wherein the communication bus connects the various components.
- RAM 251 is used to store temporary data for the operating system or other running programs.
- ROM 252 is used to store various system startup instructions.
- ROM 252 is used to store a basic input output system (BIOS), which completes the power-on self-check of the system, the initialization of each functional module in the system, the drivers for the system's basic input/output, and the booting of the operating system.
- when the display device 200 is powered on, the CPU executes the system start-up instructions in the ROM 252 and copies the temporary data of the operating system stored in the memory to the RAM 251, so as to facilitate starting or running the operating system.
- the CPU copies the temporary data of various application programs in the memory to the RAM 251, so as to facilitate starting or running various application programs.
- the CPU processor 254 executes the operating system and application program instructions stored in the memory, and executes various application programs, data, and contents according to the various interactive instructions received from external input, so as to finally display and play various audio and video contents.
- CPU processor 254 may include multiple processors.
- the plurality of processors may include a main processor and one or more sub-processors.
- the main processor is used to perform some operations of the display device 200 in the pre-power-on mode and/or operations of displaying images in the normal mode; the one or more sub-processors are used for operations in states such as the standby mode.
- the graphics processor 253 is used to generate various graphic objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an operator, which performs operations by receiving the various interactive instructions input by the user and displays the various objects according to their display properties, and a renderer, which renders the various objects obtained by the operator; the rendered objects are used for display on the display.
- the video processor 270 is configured to receive an external video signal and, according to the standard codec protocol of the input signal, perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis, so as to obtain a signal that can be directly displayed or played on the display device 200.
- the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.
- the demultiplexing module is used for demultiplexing the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing module demultiplexes it into video signals and audio signals.
- the video decoding module is used to process the demultiplexed video signal, including decoding and scaling.
- the image synthesis module, such as an image synthesizer, is used to superimpose and mix the GUI signal, generated by the graphics generator according to user input or by the system itself, with the scaled video image, so as to generate an image signal that can be displayed.
- the frame rate conversion module is used to convert the input video frame rate, such as converting 60Hz frame rate to 120Hz frame rate or 240Hz frame rate.
- frame rate conversion is usually implemented by means of frame insertion, as sketched below.
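- A minimal Python/NumPy sketch of frame insertion follows; it doubles a 60 Hz clip to 120 Hz by inserting a blended frame between neighbours, whereas production FRC hardware typically uses motion-compensated interpolation.

```python
import numpy as np

def insert_frames(frames: np.ndarray) -> np.ndarray:
    """Convert an (N, H, W, C) 60 Hz sequence to 120 Hz by frame insertion.

    Each inserted frame is the average of its two neighbours; the last
    source frame is simply repeated. Plain blending stands in for the
    motion-compensated interpolation used by real FRC hardware.
    """
    n = frames.shape[0]
    out = np.empty((2 * n,) + frames.shape[1:], dtype=frames.dtype)
    out[0::2] = frames                       # keep the original frames
    # blended in-between frames (uint16 sum avoids uint8 overflow)
    out[1:-1:2] = ((frames[:-1].astype(np.uint16) +
                    frames[1:].astype(np.uint16)) // 2).astype(frames.dtype)
    out[-1] = frames[-1]                     # repeat the final frame
    return out

clip_60hz = np.random.randint(0, 256, (60, 90, 160, 3), dtype=np.uint8)
clip_120hz = insert_frames(clip_60hz)
print(clip_120hz.shape)  # (120, 90, 160, 3)
```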
- the display formatting module is used to convert the received video output signal after frame rate conversion into a signal conforming to the display format, such as an RGB data signal.
- the graphics processor 253 may be integrated with the video processor or configured separately. When integrated, they can jointly process the graphics signals output to the display; when configured separately, they can perform different functions respectively, for example, a GPU + FRC (Frame Rate Conversion) architecture.
- the audio processor 280 is configured to receive an external audio signal and, according to the standard codec protocol of the input signal, perform decompression and decoding as well as noise reduction, digital-to-analog conversion, and amplification processing, to obtain a sound signal that can be played in the speaker.
- the video processor 270 may comprise one or more chips.
- the audio processor may also include one or more chips.
- the video processor 270 and the audio processor 280 may be separate chips, or may be integrated into one or more chips together with the controller.
- the audio output, under the control of the controller 250, receives the sound signal output by the audio processor 280, for example through the speaker 286. In addition to the speaker carried by the display device 200 itself, sound can be output to an external audio output terminal of an external device, such as an external audio interface or an earphone interface; the audio output may also include a short-range communication module in the communication interface, such as a Bluetooth module for outputting sound to a Bluetooth speaker.
- the power supply 290 under the control of the controller 250, provides power supply support for the display device 200 with the power input from the external power supply.
- the power supply 290 may include a built-in power supply circuit installed inside the display device 200, or may be an external power supply that supplies power through an external power supply interface provided on the display device 200.
- the user interface 265 is used for receiving user input signals, and then sending the received user input signals to the controller 250 .
- the user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
- when the user inputs user commands through the control device 100 or the mobile terminal 300, the user input interface receives the input, and the display device 200 responds to the user's input through the controller 250.
- the user may input user commands on a graphical user interface (GUI) displayed on the display 275, and the user input interface receives the user input commands through the graphical user interface (GUI).
- the user may input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the user input command.
- a "user interface” is a medium interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user.
- the commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface that is displayed in a graphical manner and related to computer operations. It may consist of interface elements displayed on the screen of the electronic device, such as icons, windows, and controls, wherein the controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
- the memory 260 stores various software modules for driving the display device 200.
- various software modules stored in the first memory include at least one of a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.
- the basic module is used for signal communication between various hardwares in the display device 200, and is a low-level software module that sends processing and control signals to the upper-layer module.
- the detection module is a management module used to collect various information from various sensors or user input interfaces, perform digital-to-analog conversion, and analyze and manage.
- the speech recognition module includes a speech parsing module and a speech instruction database module.
- the display control module is a module used to control the display to display image content, and can be used to play information such as multimedia image content and UI interface.
- the communication module is a module for control and data communication with external devices.
- the browser module is a module for performing data communication with browsing servers. The service modules are used to provide various services, including various application programs.
- the memory 260 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects, and the like.
- FIG. 3 exemplarily shows a configuration block diagram of the control apparatus 100 according to an exemplary embodiment.
- the control device 100 includes a controller 110 , a communication interface 130 , a user input/output interface, a memory, and a power supply.
- the control apparatus 100 is configured to control the display device 200 , and can receive the user's input operation instructions, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, so as to play an interactive intermediary role between the user and the display device 200 .
- the user operates the channel addition and subtraction keys on the control device 100, and the display device 200 responds to the channel addition and subtraction operation.
- control apparatus 100 may be a smart device.
- control apparatus 100 may install various applications for controlling the display device 200 according to user requirements.
- the mobile terminal 300 or other intelligent electronic device can perform a similar function of the control apparatus 100 after installing the application for operating the display device 200 .
- the user can install the application, various function keys or virtual buttons of the graphical user interface available on the mobile terminal 300 or other intelligent electronic devices, so as to realize the function of the physical key of the control apparatus 100 .
- the controller 110 includes a processor 112, a RAM 113, a ROM 114, a communication interface 130, and a communication bus.
- the controller is used to control the running and operation of the control device 100, the communication and cooperation among internal components, and the external and internal data processing functions.
- the communication interface 130 realizes the communication of control signals and data signals with the display device 200 under the control of the controller 110 .
- the received user input signal is sent to the display device 200 .
- the communication interface 130 may include at least one of other near field communication modules such as a WiFi chip 131 , a Bluetooth module 132 , and an NFC module 133 .
- in the user input/output interface 140, the input interface includes at least one of a microphone 141, a touch panel 142, a sensor 143, a key 144, and other input interfaces.
- the user can implement the user command input function through actions such as voice, touch, gesture, pressing, etc.
- the input interface converts the received analog signal into a digital signal, and converts the digital signal into a corresponding command signal, and sends it to the display device 200.
- the output interface includes an interface for transmitting received user instructions to the display device 200 .
- it can be an infrared interface or a radio frequency interface.
- when the infrared signal interface is used, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module; when a radio frequency signal interface is used, the user input instruction needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 through the radio frequency sending terminal.
- control device 100 includes at least one of a communication interface 130 and an input-output interface 140 .
- the control device 100 is configured with a communication interface 130, such as modules such as WiFi, Bluetooth, NFC, etc., which can send user input instructions to the display device 200 through WiFi protocol, Bluetooth protocol, or NFC protocol encoding.
- the memory 190 is used to store various operating programs, data, and applications for driving and controlling the control device 100 under the control of the controller.
- the memory 190 can store various control signal instructions input by the user.
- the power supply 180 is used to provide operating power support for each element of the control device 100 under the control of the controller, and may be a battery and related control circuit.
- a system may include a kernel (Kernel), a command parser (shell), a file system, and applications.
- the kernel, shell, and file system together form the basic operating system structure, which allows users to manage files, run programs, and use the system.
- after startup, the kernel activates the kernel space, abstracts the hardware, initializes hardware parameters, etc., and runs and maintains virtual memory, the scheduler, signals, and inter-process communication (IPC).
- the shell and user applications are loaded.
- An application is compiled into machine code after startup, forming a process.
- the system is divided into four layers, which are, from top to bottom, an application layer (referred to as the "application layer"), an application framework layer (referred to as the "framework layer"), the Android runtime and system library layer (referred to as the "system runtime layer"), and a kernel layer.
- at least one application program runs in the application layer; these application programs may be a window program, a system setting program, a clock program, a camera application, etc. built into the operating system, or application programs developed by third-party developers, such as the Hijian program, the karaoke (K song) program, the magic mirror program, etc.
- the application package in the application layer is not limited to the above examples, and may actually include other application packages, which is not limited in this embodiment of the present application.
- the framework layer provides an application programming interface (API) and a programming framework for applications in the application layer.
- the application framework layer includes some predefined functions.
- the application framework layer is equivalent to a processing center, which decides to let the applications in the application layer take action.
- the application program can access the resources in the system and obtain the services of the system during execution through the API interface.
- the application framework layer in the embodiments of the present application includes managers (Managers), content providers (Content Providers), and the like, wherein the managers include at least one of the following modules: an Activity Manager, used to interact with all activities running in the system; a Location Manager, used to provide system services or applications with access to the system location services; a Package Manager, used to retrieve various information related to the application packages currently installed on the device; a Notification Manager, used to control the display and clearing of notification messages; and a Window Manager, used to manage icons, windows, toolbars, wallpapers, and desktop widgets on the user interface.
- the activity manager is used to manage the life cycle of each application and the usual navigation and back functions, such as controlling the exit of an application (including switching the user interface currently displayed in the display window to the system desktop), opening an application, and going back (including switching the user interface currently displayed in the display window to the upper-level user interface of the currently displayed one), and the like.
- the window manager is used to manage all window programs, for example, obtaining the size of the display screen, judging whether there is a status bar, locking the screen, taking screenshots, and controlling changes of the display window (for example, reducing the display window, shaking display, twisting deformation display, etc.).
- the system runtime layer provides support for the upper layer, that is, the framework layer.
- the Android operating system will run the C/C++ library included in the system runtime layer to implement the functions to be implemented by the framework layer.
- the kernel layer is the layer between hardware and software. As shown in Figure 4, the kernel layer at least includes at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor driver (such as fingerprint sensor, temperature sensor, touch sensors, pressure sensors, etc.), etc.
- the kernel layer further includes a power driver module for power management.
- software programs and/or modules corresponding to the software architecture in FIG. 4 are stored in the first memory or the second memory shown in FIG. 2 or FIG. 3 .
- when the remote control receiving device receives an input operation from the remote control, a corresponding hardware interrupt is sent to the kernel layer.
- the kernel layer processes the input operation into the original input event (including the value of the input operation, the timestamp of the input operation and other information).
- Raw input events are stored at the kernel layer.
- the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event according to the current position of the focus. Taking the input operation as a confirmation operation whose corresponding control is the magic mirror application icon as an example, the magic mirror application calls the interface of the application framework layer to start itself, and then calls the kernel layer to start the camera driver and capture still images or videos through the camera.
- the display device receives an input operation (such as a split-screen operation) performed by the user on the display screen, and the kernel layer can generate a corresponding input event according to the input operation and report the event to the application framework layer.
- the window mode (such as multi-window mode) and window position and size corresponding to the input operation are set by the activity manager of the application framework layer.
- the window manager of the application framework layer draws the window according to the settings of the activity manager, then sends the drawn window data to the display driver of the kernel layer, and the display driver displays the corresponding application interfaces in different display areas of the display screen.
- the application layer contains at least one application whose icon controls can be displayed on the display, such as a live TV application icon control, a video-on-demand application icon control, a media center application icon control, an application center icon control, a game application icon control, etc.
- the live TV application may provide live TV from different sources.
- a live TV application may provide a TV signal using input from cable, over-the-air, satellite services, or other types of live TV services.
- the live TV application may display the video of the live TV signal on the display device 200 .
- a video-on-demand application may provide videos from various storage sources. Unlike live TV applications, video-on-demand provides video display from certain storage sources; for example, video-on-demand can come from the server side of cloud storage, or from local hard disk storage containing existing video programs.
- the media center application may provide various multimedia content playback applications.
- a media center may provide services other than live TV or video-on-demand, where users can access various images or audio through a media center application.
- the application center may provide storage of various applications.
- An application can be a game, an application, or some other application that is related to a computer system or other device but can be run on a Smart TV.
- the application center can obtain these applications from various sources, store them in local storage, and then run them on the display device 200 .
- the hardware or software architecture in some embodiments may be based on the introduction in the foregoing embodiments, or in some embodiments may be based on other similar hardware or software architectures, as long as the technical solutions of the present application can be implemented.
- the image capturer of the display device 200 may include a camera, and the user may make a video call with a user using another display device 200 through a video call type application installed on the display device 200 .
- the call interface displayed by the display device 200 includes two windows, and the images collected by the devices of both parties of the video call are respectively displayed in different windows of the call interface.
- the backgrounds of the characters in the two windows are usually different, and the characters on both sides of the video call are in two different backgrounds.
- the present application provides a hybrid call solution based on AR technology.
- the hybrid call solution is based on the camera of the display device 200 being a 3D camera module, which can realize AR hybrid call.
- the 3D camera module may include a 3D camera and other cameras, such as a wide-angle camera, a macro camera, a main camera, etc.; in other embodiments, the 3D camera module may also only include a 3D camera.
- FIG. 6 it is a schematic diagram of an AR hybrid call according to some embodiments.
- the 3D camera modules of the two display devices 200 collect depth images respectively and upload them to the server, and the server can mix the characters of both sides of the video call onto the same background according to the two depth images, so that both display devices 200 can display the characters of both parties in the same background, which improves the video chat experience.
- FIG. 7 it is a schematic diagram of a hybrid call interaction according to some embodiments.
- the calling end and the called end can conduct a mixed call through the server, wherein the display device 200 that sends the mixed call request may be called the calling end, and the display device 200 that receives the mixed call request may be called the called end.
- the video call application also has a voice call function and a function of switching from a voice call to a video call; therefore, the hybrid call solution provided by the embodiments of the present application can also be applied to voice call scenarios, enabling users to switch from a voice call to a mixed call.
- the video call interface includes two windows, one of which displays the character and background of the calling terminal, and the other window displays the character and background of the called terminal.
- the user of the calling end can be called the first character
- the user of the called end can be called the second character.
- the backgrounds of the first character and the second character are usually different.
- in FIG. 8, the horizontal stripes are used to represent the background of the first character, which is usually the environment where the first character is located, and the vertical stripes are used to represent the background of the second character, which is usually the environment where the second character is located.
- the controller of the display device 200 may query whether AR hybrid calling is supported after the video calling application is started; whether AR hybrid calling is supported can be determined according to the enabling conditions of the 3D camera module in the video calling application.
- the enabling conditions may include that the display device 200 has a 3D camera module, that the video calling application has permission to use the 3D camera module, and that the 3D camera module works normally; a sketch of this check follows. If the display device 200 detects that the video call application meets the enabling conditions for the 3D camera module, as shown in FIG. 8, the display can be controlled to display the hybrid call control on the video call interface.
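- A minimal sketch of this enabling-conditions check, with the three conditions modeled as a hypothetical status object (the application does not prescribe an API):

```python
from dataclasses import dataclass

@dataclass
class CameraStatus:
    """Hypothetical snapshot of the enabling conditions named above."""
    has_3d_module: bool       # display device has a 3D camera module
    app_has_permission: bool  # video call app may use the module
    module_working: bool      # module passes its self-test

def ar_hybrid_call_supported(status: CameraStatus) -> bool:
    # All three enabling conditions must hold at query time; any of them
    # (e.g. the permission) may change later, so re-check before actually
    # starting the mixed call.
    return (status.has_3d_module
            and status.app_has_permission
            and status.module_working)

print(ar_hybrid_call_supported(CameraStatus(True, True, True)))   # True
print(ar_hybrid_call_supported(CameraStatus(True, False, True)))  # False
```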
- the name of the hybrid call control can be "AR call”
- the trigger method of the hybrid call control can be voice trigger, click trigger, etc.
- the trigger signal of the hybrid call control can be the control signal used to indicate the hybrid call.
- the control signal indicating the mixed call may also be other signals, such as a preset gesture signal, a double-click signal at any position on the screen, and the like.
- the controller of the display device 200 can directly display the mixed call control shown in FIG. 8 after the video call application is started, and then, after receiving the control signal for indicating the mixed call, detect whether the video call application meets the enabling conditions for the 3D camera module.
- after the user inputs a control signal for indicating a mixed call by clicking the hybrid call control on the display device 200, the display device 200 becomes the calling end and the user becomes the first user.
- since in some embodiments the calling end displays the mixed call control only after detecting that AR mixed calling is supported, the calling end can, after receiving the control signal for indicating the mixed call, directly generate a mixed call request and have the server send it to the called end, which saves the detection time of the 3D camera module. In some embodiments, however, the calling end does not detect whether AR mixed calling is supported before receiving the control signal for indicating the mixed call; moreover, the above enabling conditions may change at any time, for example, the user may have turned off the use permission of the 3D camera module. Therefore, the calling end can detect whether AR hybrid calling is supported after the user inputs the control signal for indicating the mixed call, to make sure that the 3D camera module of the calling end can be enabled normally.
- after the calling end detects that the 3D camera module can be enabled normally, it generates a mixed call request and sends the mixed call request to the server, and the server can send the mixed call request to the called end, so as to query whether the called end supports and accepts AR hybrid calls.
- after receiving the mixed call request, the called end can query whether it supports AR mixed calling, which can be determined according to the enabling conditions of the 3D camera module in its video calling application. If the video calling application does not support AR hybrid calling, it can feed back a signal indicating that AR hybrid calling is not supported to the server, and the server sends this signal to the calling end so that the calling end can display prompt information indicating that the other party does not support AR hybrid calling. If the video calling application supports AR hybrid calling, second prompt information is generated and the display is controlled to display it.
- the second prompt information can include a prompt box and a selection control.
- the content of the prompt box can be information indicating whether to accept the mixed call, such as "Confirm to make a mixed call?".
- the number of selection controls can be two: one, when triggered, indicates that the user at the called end accepts the mixed call, and the other, when triggered, indicates that the user at the called end rejects the mixed call.
- when the called terminal receives the control signal corresponding to the second prompt information input by the user, and the control signal is a signal for rejecting the mixed call, the called terminal generates a rejection signal and sends it to the server, and the server can forward the rejection signal to the calling end.
- upon receiving the rejection signal, the calling end generates third prompt information and controls the display to display it.
- the third prompt information may include a prompt box, and the content of the prompt box may be information prompting the other party to reject the mixed call, such as "the other party has rejected the mixed call".
- when the called end receives the control signal corresponding to the second prompt information input by the user, and the control signal is a signal for accepting the mixed call, the called end generates a confirmation signal and sends it to the server.
- the server may directly forward the confirmation signal to the calling terminal, and the calling terminal may control the 3D camera module to collect the first depth image according to the received confirmation signal, and send the first depth image to the server.
- in some embodiments, the call between the two users is a voice call. If the user touched the mixed call control by mistake and the other party accepts the mixed call request, the calling end directly starting the 3D camera module according to the confirmation signal may expose the caller's privacy; or the caller did not touch the hybrid call control by mistake and really wants to establish a hybrid call connection, but is not yet ready to turn on the camera. In order to protect the caller's privacy, the server can send a first prompt signal to the calling terminal according to the confirmation signal of the called terminal, and the calling terminal generates and controls the display to display the first prompt information according to the received first prompt signal.
- the first prompt information may include a prompt box and selection controls; the content of the prompt box may be information prompting whether to proceed with the hybrid call, such as "Are you sure you want to perform the hybrid call?", and the number of selection controls may be two: one, when triggered, indicates that the user at the calling end confirms the mixed call, and the other, when triggered, indicates that the user at the calling end cancels the mixed call.
- if the user at the calling end confirms the mixed call, the video calling application of the calling end controls the 3D camera module to collect the first depth image and sends the first depth image to the server.
- the first depth image may include a point cloud containing depth information.
- the video call application of the calling end generates a mixed stream from the first depth image, the audio collected by the microphone, and the video collected by the other cameras of the 3D camera module, and sends it to the server for the server to perform audio and video processing, such as portrait background blur, portrait beautification, and sound effect settings; a sketch of depth-based background blur follows.
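- Portrait background blur, one of the listed effects, can be sketched with a depth mask: pixels beyond an assumed person-depth cutoff are blurred while nearer pixels stay sharp. This NumPy/SciPy sketch is illustrative, not the server's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_background(rgb: np.ndarray, depth_m: np.ndarray,
                    person_max_depth: float = 1.5) -> np.ndarray:
    """Blur pixels farther than `person_max_depth` metres (assumed cutoff).

    rgb:     (H, W, 3) uint8 image from the 2D camera
    depth_m: (H, W) float depth map aligned with `rgb`
    """
    blurred = gaussian_filter(rgb.astype(np.float32), sigma=(6, 6, 0))
    mask = (depth_m > person_max_depth)[..., None]  # True where background
    out = np.where(mask, blurred, rgb.astype(np.float32))
    return out.astype(np.uint8)

rgb = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 4.0, (120, 160))
print(blur_background(rgb, depth).shape)  # (120, 160, 3)
```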
- the server may send a character depth image request to the called terminal to request the spatial information, i.e., the depth information, of the character at the called terminal.
- the called end can control the 3D camera module to collect a second depth image according to the received person depth image request.
- the second depth image can include a point cloud containing depth information, and the called end can extract the depth information of the second character, that is, the character space segmentation information, from the second depth image and send it to the server.
- the method for extracting the depth information of the second character from the second depth image includes: using a human body recognition algorithm to perform human body recognition on the second depth image and recognize the position of the second character in the second depth image; and performing background segmentation at that position in the second depth image, so as to extract the depth information of the second character, as sketched below.
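- A hedged sketch of this two-step extraction follows; the human body recognition step is stubbed with a hypothetical bounding box, and the 0.3 m segmentation tolerance is an assumption, since the embodiment does not name a specific algorithm:

```python
import numpy as np

def detect_person_bbox(depth: np.ndarray) -> tuple:
    """Stand-in for the human body recognition algorithm; a real system
    would run a detector here. Returns (top, bottom, left, right)."""
    h, w = depth.shape
    return h // 4, 3 * h // 4, w // 4, 3 * w // 4  # hypothetical box

def extract_person_depth(depth: np.ndarray) -> np.ndarray:
    """Return a depth map containing only the person; background is NaN.

    Step 1: recognize the person's position in the depth image.
    Step 2: segment person from background inside that position, here by
    keeping pixels close to the median depth of the detected region.
    """
    top, bottom, left, right = detect_person_bbox(depth)
    region = depth[top:bottom, left:right]
    person_dist = np.median(region)            # assumed person distance
    person = np.full_like(depth, np.nan)
    inside = np.zeros(depth.shape, dtype=bool)
    inside[top:bottom, left:right] = True
    keep = inside & (np.abs(depth - person_dist) < 0.3)  # 0.3 m tolerance
    person[keep] = depth[keep]
    return person

depth = np.random.uniform(0.8, 3.5, (120, 160))
print(np.count_nonzero(~np.isnan(extract_person_depth(depth))))
```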
- the server may send a depth image request to the called terminal to request the called terminal to provide the depth information of the called terminal.
- the called end can control the 3D camera module to collect the second depth image according to the received depth image request, send the second depth image to the server, and the server extracts the depth information of the second person from the second depth image.
- the server may render the second character into the first depth image according to the depth information of the second character and the depth information of the first depth image to obtain a first mixed image, and send the first mixed image to the called end and the calling end respectively.
- the calling end and the called end respectively control their respective displays to display the first mixed image.
- FIG. 9 it is a schematic diagram of a hybrid call interface according to some embodiments. As shown in FIG. 9 , in the first mixed image, both the first character and the second character are in the same background, and the background is the real background of the first character.
- the server may perform audio and video processing on the first mixed image to obtain an AR mixed stream, and send the AR mixed stream to the calling end and the called end respectively, so that the calling end and the called end can display the processed first mixed image and play the processed audio.
- the hybrid call interface may be provided with a background switching control.
- the name of the control may be “switching background”.
- when a user triggers the background switching control, the server may switch the first mixed image to the second mixed image shown in FIG. 10, where the background of the second mixed image is the real background of the second character. Taking the user of the calling end triggering the background switching control as an example, the specific process of switching the background is as follows:
- when the user of the calling end inputs the control signal for instructing background switching by clicking the switch background control on the calling end, the calling end, in response to receiving the control signal, sends a background switching request to the server.
- in some embodiments, the called end sends the second depth image to the server; in other embodiments, the called end only sends the depth information of the second character to the server, while switching to the background of the second character requires the background depth information of the second character. The server can therefore determine whether it has the background depth information of the second character. If it does, the server can extract the depth information of the first character from the first depth image (using the same method as extracting the depth information of the second character from the second depth image) and render the first character into the second depth image to obtain the second mixed image. If it does not, the server can send a depth image request to the called end to request the second depth image, then extract the depth information of the first character from the first depth image and render the first character into the second depth image to obtain the second mixed image.
- after generating the second mixed image, the server sends the second mixed image to the called end and the calling end respectively; after receiving the second mixed image, the calling end and the called end respectively control their displays to switch the first mixed image to the second mixed image.
- the interface of the second mixed image may retain a switch background control for the user to choose to switch the second mixed image to the first mixed image.
- an embodiment of the present application further provides a video call method.
- the video call method may include the following steps:
- Step S110: Send the mixed call request of the calling end to the called end.
- the server may send the mixed call request of the calling end to the called end.
- Step S120: Acquire a first depth image collected by the calling terminal according to the confirmation signal received from the called terminal.
- after receiving the confirmation signal from the called terminal, the server may send the confirmation signal to the calling terminal, so that the calling terminal can control the 3D camera module to collect the first depth image and send the first depth image to the server.
- in some embodiments, the server may send a first prompt signal to the calling end so that the calling end displays the first prompt information; after the calling end receives the control signal corresponding to the first prompt information input by the user, the 3D camera module is controlled to collect the first depth image, and the first depth image is sent to the server.
- Step S130: acquire the depth information of the second character from the called end.
- after receiving the first depth image, the server may send a person depth image request to the called end to obtain the depth information of the second character in the second depth image.
- Step S140: render the second character into the first depth image according to the depth information of the second character to obtain the first mixed image.
- based on the depth information of the second character and the background depth information in the first depth image, the server may render the second character to a suitable position in the first depth image, such as the same horizontal position as the first character, adjust the size of the second character to be comparable to that of the first character, and finally synthesize the first mixed image.
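- as a concrete illustration of this rendering step, here is a minimal depth-aware compositing sketch in Python. It assumes the second character has already been shifted to the chosen position and scaled to a comparable size; the array layout and every name are assumptions, not part of the specification.

```python
import numpy as np

def composite(scene_rgb, scene_depth, person_rgb, person_depth, person_mask):
    """Render an extracted character into a depth image (sketch).

    scene_rgb:   (H, W, 3) color image of the calling end's scene
    scene_depth: (H, W) per-pixel depth of that scene
    person_rgb / person_depth / person_mask: the extracted second character,
        already positioned and scaled into the same (H, W) frame.
    """
    out_rgb = scene_rgb.copy()
    out_depth = scene_depth.copy()
    # Draw the character only where it is nearer than the existing scene, so
    # occlusion between the character and the background is resolved per pixel.
    visible = person_mask & (person_depth < scene_depth)
    out_rgb[visible] = person_rgb[visible]
    out_depth[visible] = person_depth[visible]
    return out_rgb, out_depth
```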
- Step S150: send the first mixed image to the calling end and the called end respectively.
- the server sends the first mixed image to the calling end and the called end respectively, so that each end can display the first mixed image on its own display.
- further, the server may also receive a background switching request from the calling end or the called end, switch the first mixed image to the second mixed image, or switch the second mixed image back to the first mixed image.
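- taken together, steps S110 to S150 amount to a simple server-side orchestration. A minimal sketch under stated assumptions: the transport object and all of its methods are hypothetical stand-ins for the real signaling channel, which this application does not specify.

```python
def hybrid_call_flow(transport):
    """Orchestrate steps S110-S150 on the server (sketch)."""
    # S110: forward the calling end's hybrid call request to the called end.
    request = transport.receive("calling", "hybrid_call_request")
    transport.send("called", request)

    # S120: on the called end's confirmation, obtain the first depth image.
    transport.receive("called", "confirmation")
    transport.send("calling", "confirmation")       # or a first prompt signal
    first_depth = transport.receive("calling", "first_depth_image")

    # S130: request the second character's depth information.
    transport.send("called", "person_depth_image_request")
    second_person = transport.receive("called", "person_depth_info")

    # S140: render the second character into the first depth image
    # (see the compositing sketch above).
    first_mixed = transport.compose(second_person, first_depth)

    # S150: deliver the first mixed image to both ends.
    transport.send("calling", first_mixed)
    transport.send("called", first_mixed)
```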
- an embodiment of the present application further provides a server, which can be used to execute the above video call method.
- as can be seen from the above, the embodiments of the present application collect the depth information of both parties of a call through 3D camera modules and, according to that depth information, render the character of one party into the depth image of the other party, realizing a real-time display of both parties against the same real background. This solves the problem of the two characters on the call interface appearing in different backgrounds and improves the user's video call experience.
- a rotating TV is a new type of smart TV, which mainly includes a display and a rotating assembly. The display is fixed to a wall or bracket through the rotating assembly, and the orientation of the display can be adjusted by the rotating assembly, so that the screen can be rotated to fit display images of different aspect ratios.
- in most cases, the display is placed horizontally to show video images with aspect ratios such as 16:9 and 18:9. When the aspect ratio of the video image is 9:16, 9:18, or similar, a horizontally placed display has to scale the image down and show black areas on both sides. The display can therefore be rotated to a vertical position through the rotating assembly to fit video images with 9:16, 9:18 and similar ratios.
- a rotatable display device used in vertical video playback and vertical picture browsing scenarios can therefore bring a better user experience.
- current screen rotation methods in the field are usually driven by a remote control, voice, video playback, or mobile phone screen projection; for example, pressing a "rotate" key on the remote control drives the rotating assembly.
- however, these rotation driving methods rely on the cooperation of external devices, are complicated to operate, and depend on UI prompts during operation, so they do not provide an intuitive interactive experience.
- the embodiments of the present application provide a display device and a rotation control method, and the rotation control method can be configured in the display device.
- the display device may be a rotatable display device such as a rotating television, a computer, a tablet computer, or the like.
- referring to FIG. 1, an application scenario diagram of a display device provided by some embodiments of the present application, communication between the control apparatus 100 and the display device 200 may be performed in a wired or wireless manner.
- the control apparatus 100 is configured to control the display device 200; it can receive operation instructions input by the user and convert them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for the interaction between the user and the display device 200.
- for example, the user operates the channel up/down keys on the control apparatus 100, and the display device 200 responds to the channel up/down operation.
- the control apparatus 100 may be a remote controller 100A, which communicates with the display device 200 through infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods, and controls the display device 200 wirelessly or through other wired methods.
- the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
- for example, the user can control the functions of the display device 200 by inputting corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power on/off key on the remote control.
- the control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, a computer, a notebook computer, and the like.
- the display device 200 is controlled using an application running on the smart device.
- the app can be configured to provide users with various controls through an intuitive user interface (UI) on the screen associated with the smart device.
- a software application may be installed on both the mobile terminal 100B and the display device 200, and connection communication may be implemented through a network communication protocol, so as to achieve one-to-one control operation and data communication.
- the mobile terminal 100B and the display device 200 can establish a control instruction protocol, so that by operating various function keys or virtual controls of the user interface provided on the mobile terminal 100B, the functions of the physical keys of the remote controller 100A can be realized.
- the audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200 to implement a synchronous display function.
- in addition to the broadcast receiving TV function, the display device 200 may additionally provide a smart network TV function with computer support functions.
- the display device may be implemented as digital TV, Internet TV, Internet Protocol TV (IPTV), or the like.
- the display device 200 may be a liquid crystal display, an organic light emitting display, or a projection device.
- the specific display device type, size and resolution are not limited.
- the display device 200 also performs data communication with the server 300 through various communication methods.
- the display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks.
- the server 300 may provide various contents and interactions to the display device 200 .
- display device 200 may send and receive information, such as receiving electronic program guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library.
- the server 300 may be a single cluster or multiple clusters, and may include one or more types of servers.
- Other network service contents such as video-on-demand and advertising services are provided through the server 300 .
- the display device 200 includes a controller 250, a display 275, a terminal interface 278 extending from a gap in the backplane, and a rotating assembly 276 connected to the backplane; the rotating assembly 276 may allow the display 275 to rotate.
- the rotating assembly 276 can rotate the display to a portrait state, that is, a state in which the vertical side of the screen is longer than the horizontal side, or rotate it to a landscape state, that is, a state in which the horizontal side of the screen is longer than the vertical side.
- a configuration block diagram of the control apparatus 100 is exemplarily shown in FIG. 13.
- the control device 100 includes a controller 110 , a memory 120 , a communicator 130 , a user input interface 140 , a user output interface 150 , and a power supply 160 .
- the display device 200 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, a rotating assembly 276, a touch component 277, an audio processor 280, an audio output interface 285, and a power supply 290.
- the rotating assembly 276 may include components such as a drive motor, a rotating shaft, and the like.
- the drive motor can be connected to the controller 250 and, under the control of the controller 250, outputs a rotation angle; one end of the rotating shaft is connected to the power output shaft of the drive motor, and the other end is connected to the display 275, so that the display 275 can be fixedly installed on a wall or stand through the rotating assembly 276.
- the rotating assembly 276 may also include other components, such as transmission components, detection components, and the like.
- the transmission component can adjust the rotational speed and torque output by the rotating component 276 through a specific transmission ratio, which can be a gear transmission mode;
- the detection component can be composed of sensors arranged on the rotating shaft, such as an angle sensor, an attitude sensor, and the like. These sensors can detect parameters such as the rotation angle of the rotating component 276, and send the detected parameters to the controller 250, so that the controller 250 can judge or adjust the state of the display device 200 according to the detected parameters.
- the rotating assembly 276 may include, but is not limited to, one or more of the above components.
- the touch component 277 can be arranged on the display screen of the display 275 to detect the user's touch actions.
- the controller 250 can acquire touch commands input by the user through the touch component 277, and respond to different control actions according to different touch commands.
- the touch command input by the user may include various forms according to different touch actions corresponding to the touch command. For example, tap, swipe, long press, etc. If the touch component 277 supports multi-touch, the form of touch commands can be further added, for example, two-finger click, two-finger slide, two-finger long press, three-finger click, three-finger slide, etc. Different forms of touch commands can represent different control actions. For example, a click action performed on an application icon may represent starting and running the application corresponding to the icon.
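- a controller might map raw touch events onto these command forms with a small classifier. The sketch below is illustrative: the event fields and both thresholds are assumptions, since the application fixes no concrete values.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    fingers: int        # number of simultaneous contact areas
    duration_ms: float  # time between press and release
    distance_px: float  # total travel of the contact centroid

LONG_PRESS_MS = 500  # illustrative threshold
SWIPE_PX = 30        # illustrative threshold

def classify(event: TouchEvent) -> str:
    """Label a touch event as tap, slide, or long press, per finger count."""
    if event.distance_px >= SWIPE_PX:
        action = "slide"
    elif event.duration_ms >= LONG_PRESS_MS:
        action = "long press"
    else:
        action = "tap"
    return f"{event.fingers}-finger {action}"
```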
- the controller 250 includes a random access memory (RAM) 251 , a read only memory (ROM) 252 , a graphics processor 253 , a CPU processor 254 , a communication interface 255 , and a communication bus 256 .
- the RAM 251, the ROM 252, the graphics processor 253, the CPU processor 254, and the communication interface 255 are connected to one another through the communication bus 256.
- FIG. 15 exemplarily shows a block diagram of the architecture configuration of the operating system in the memory of the display device 200 .
- the operating system architecture is, from top to bottom, the application layer, the middleware layer and the kernel layer.
- built-in system applications and non-system-level applications belong to the application layer, which is responsible for direct interaction with users.
- the application layer may include multiple applications, such as a settings application, an electronic post application, a media center application, and the like. These applications can be implemented as Web applications, which are executed based on the WebKit engine, and can be specifically developed and executed based on HTML5 (HyperText Markup Language), Cascading Style Sheets (CSS) and JavaScript.
- the middleware layer can provide some standardized interfaces to support the operation of various environments and systems.
- for example, the middleware layer may be implemented as MHEG (Multimedia and Hypermedia information coding Experts Group) middleware related to data broadcasting, as DLNA middleware for communicating with external devices, or as middleware providing the browser environment in which each application program in the display device runs.
- the kernel layer provides core system services, such as file management, memory management, process management, network management, and system security authority management.
- the kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on a Linux operating system.
- the kernel layer also provides communication between system software and hardware, and provides device driver services for various hardware, for example: a display driver for the display, a camera driver for the camera, a key driver for the remote control, a WiFi driver for the WiFi module, an audio driver for the audio output interface, and a power management driver for the power management (PM) module.
- the user interface 265 receives various user interactions; specifically, it is used to send the user's input signals to the controller 250, or to transmit output signals from the controller 250 to the user.
- for example, the remote controller 100A may send user input signals, such as power switch signals, channel selection signals, and volume adjustment signals, to the user interface 265, which forwards them to the controller 250; or the remote controller 100A may receive output signals such as audio, video, or data processed by the controller 250 and output through the user interface 265, and display the received output signals or output them in the form of audio or vibration.
- the rotation operation of the display device 200 refers to the process in which the rotating assembly 276 drives the display 275 to change its placement angle.
- the rotating assembly 276 can drive the display 275 to rotate on a vertical plane perpendicular to the ground, so that the display 275 can be in different rotation states.
- the rotation state is a plurality of specific states of the display 275, which can be set to various forms according to the posture of the display 275, for example, a horizontal screen state, a vertical screen state, a tilted state, and the like.
- the horizontal screen state and the vertical screen state are rotation states used by most users, and can be respectively applied to the horizontal screen scene and the vertical screen scene. Therefore, in some embodiments of the present application, the landscape screen state and the portrait screen state may be referred to as standard states.
- the tilted state is usually a state in which the display 275 does not rotate properly due to the failure of the rotating component 276 , and the user rarely rotates the display 275 to the tilted state deliberately. That is, in some embodiments of the present application, the inclined state may also be referred to as a non-standard state.
- the display content presented on the display 275 in different rotation states may be different, and the difference may be reflected in the specific playback screen content, UI interface layout, and the like.
- FIG. 16A shows the landscape state of the display device 200 in some embodiments of the present application.
- an operation mode when the display 275 is in the landscape state may be referred to as the landscape media viewing mode, and an operation mode when the display 275 is in the portrait state may be referred to as the portrait media viewing mode.
- the rotating component 276 can fix the display device 200, and can drive the display 275 to rotate under the control of the controller 250, so that the display 275 is in different rotation states.
- the rotating assembly 276 can be fixed on the back of the display 275 and is in turn fixed to the wall.
- the rotation component 276 can receive a control instruction from the controller 250 to rotate the display 275 in a vertical plane, so that the display 275 is in a landscape state or a portrait state.
- the landscape state refers to a state in which, viewed from the front of the display 275, the horizontal length (width) is greater than the vertical length (height); the portrait state refers to a state in which, viewed from the front, the horizontal length (width) is smaller than the vertical length (height).
- the vertical direction in this application refers to approximately vertical, and the horizontal direction likewise refers to approximately horizontal.
- the rotation states other than the horizontal screen state and the vertical screen state are tilted states. In different tilted states, the rotation angle of the display 275 is also different.
- the display 275 can be rotated 90 degrees clockwise or counterclockwise to adjust the display 275 to a vertical screen state, as shown in FIG. 16B.
- the display 275 can display a user interface corresponding to the vertical screen state, and has an interface layout and interaction mode corresponding to the vertical screen state.
- in the portrait media viewing mode, users can watch portrait media assets such as short videos and comics; since the controller 250 in the display device 200 is also communicatively connected with the server 300, media asset data corresponding to the portrait state can be obtained in that state by calling the interface of the server 300.
- the horizontal screen state is mainly used to display horizontal media resources such as TV dramas and movies
- the vertical screen state is mainly used to display vertical media resources such as short videos and comics.
- the above-mentioned horizontal screen state and vertical screen state are just two different display states, and do not limit the displayed content.
- that is, vertical media resources such as short videos and comics can still be displayed in the landscape state, and horizontal media resources such as TV series and movies can still be displayed in the portrait state; in such cases, however, the mismatched display window needs to be compressed and adjusted.
- as shown in FIG. 17A, some embodiments of the present application provide a rotation control method, which includes the following steps:
- the user can input touch commands for rotating the display 275 through the touch component 277 .
- the specific touch instruction form may be one or more of "click, slide, long press" according to the system UI interaction strategy.
- the touch command for rotating the display 275 may be more unique or complicated than other operations.
- since single-finger tap, slide, and long-press touch commands are usually used for starting a program, moving a position, and extended operations, the touch command used to rotate the display 275 may be a multi-finger tap, slide, or long press, so as to be distinguished from the single-finger commands.
- the touch command for rotating the display 275 can be one or more of two-finger touch, three-finger touch, four-finger touch or five-finger touch.
- for example, the touch command for rotating the display 275 can be set as a five-finger touch; it can also be set as a generic multi-touch command, that is, two-finger, three-finger, four-finger, and five-finger touch commands can all trigger the rotation of the display 275, as shown in FIG. 17B.
- the touch command for rotating the display 275 may include two parts, namely a touch action part and a rotation action part. The touch action part is used to trigger the controller 250 to detect the touch action corresponding to the touch command, so as to determine whether to initiate rotation; the rotation action part can be input after the touch action part to help determine whether to trigger rotation and to control how the rotating assembly 276 rotates, including parameters such as the rotation direction and rotation angle.
- the controller 250 of the display device 200 can extract the touch action corresponding to the touch command in response to the touch command.
- the controller 250 extracts the touch actions corresponding to the touch commands in different ways.
- in some embodiments, the controller 250 can directly extract the touch action by detecting the signal data corresponding to the touch command. For example, if the operating system sets the touch command for rotating the display 275 to be drawing an "O"-shaped pattern on the screen with one finger, the controller 250 extracts the touch action after the user inputs that touch command.
- in other embodiments, the detection of the rotation action part input after the touch action part can be triggered first. For example, the operating system sets the touch command for rotating the display 275 as a five-finger rotation action; the user may first input a five-finger touch command, that is, touch the screen with five fingers. When the controller 250 detects the five-finger touch operation through the touch component 277, it can start the detection program to detect the subsequent rotation action input by the user on the screen, and control the rotation process of the display 275 according to the specific rotation action.
- the controller 250 can then compare the extracted touch action with the preset rotation action; if the touch action is the same as the preset rotation action, it is determined that the user wants to rotate the display 275, so the controller can activate the rotating assembly 276 to adjust the rotation state of the display 275.
- the controller 250 may first obtain the current rotation state of the display 275 . If the current rotation state of the display 275 is the landscape state, the rotation component 276 is controlled to adjust the display 275 to the portrait state; if the current rotation state of the display 275 is the portrait state, the rotation component 276 is controlled to adjust the display 275 to the landscape state.
- the controller 250 may send a control instruction to the rotating component 276, so that the rotating component 276 rotates according to the control instruction after receiving the control instruction.
- the control instructions may include some basic operating parameters for controlling the rotation of the rotating component 276, such as a rotation direction, a rotation angle, and the like.
- the specific parameter values in the control instruction can be determined according to the current rotation state and the preset rotation manner. For example, when the display 275 is in the landscape state and the touch action is the same as the preset rotation action, the rotating assembly 276 can be controlled to rotate 90 degrees clockwise to adjust the display 275 to the portrait state; similarly, when the display 275 is in the portrait state and the touch action is the same as the preset rotation action, the rotating assembly 276 can be controlled to rotate 90 degrees counterclockwise to adjust the display 275 to the landscape state.
- the specific value of the parameter in the control instruction can also be determined according to the touch action input by the user. For example, if the rotation motion input by the user is a clockwise motion, the rotation component 276 can be controlled to rotate 90 degrees clockwise to a corresponding rotation state. If the rotation action input by the user is a counterclockwise action, the rotation component 276 can be controlled to rotate 90 degrees counterclockwise to a corresponding rotation state.
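- the two rules above condense into one small decision function. A minimal Python sketch; the string-valued states and directions and the dictionary shape of the command are illustrative only.

```python
def build_rotation_command(current_state: str, gesture_direction: str = "") -> dict:
    """Derive control parameters for the rotating assembly (sketch).

    current_state: "landscape" or "portrait"
    gesture_direction: "clockwise", "counterclockwise", or "" when only a
        matching touch action (with no explicit direction) was detected.
    """
    if gesture_direction:
        direction = gesture_direction
    else:
        # Default described in the text: landscape rotates 90 degrees clockwise
        # to portrait; portrait rotates 90 degrees counterclockwise back.
        direction = "clockwise" if current_state == "landscape" else "counterclockwise"
    target = "portrait" if current_state == "landscape" else "landscape"
    return {"direction": direction, "angle_degrees": 90, "target_state": target}
```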
- the rotation control method provided by the present application can extract the touch action corresponding to the touch command after the user inputs the touch command, and compare it with the preset rotation action. Therefore, when the touch action is the same as the preset rotation action, the rotation component 276 is controlled to start the rotation, so as to adjust the rotation state of the display 275 .
- in this way, a rotation control method that drives the display 275 based on gesture touch detection is implemented. However, since the touch operation is not performed on a specific UI interface, as traditional UI interactions are, with text prompts guiding the user through the operation, in practical applications the user has to remember the specific touch actions before the display 275 can be rotated, which affects the user experience.
- the user may be guided to complete the touch interaction by displaying a prompt UI interface. That is, as shown in FIG. 18A , the step of acquiring the touch control instruction for rotating the display further includes:
- the display is controlled to display a prompt UI interface.
- the controller 250 can detect the number of touch points in the touch command through the touch component 277, and determine whether the number of touch points is equal to the preset judgment number, so as to control the display 275 to display a prompt UI interface.
- the prompt UI interface includes a pattern and/or text for indicating a rotation action.
- the preset judgment number can be set according to the actual UI interaction rules of the operating system, and different prompt UI interfaces can be displayed for different numbers of touch points in the user's touch command.
- the user can simultaneously touch the screen with multiple fingers to input touch commands.
- when the controller 250 detects the user's multi-finger touch operation through the touch component 277, the number of touch points in the operation can be counted. If the number of touch points is 5, equal to the preset judgment value of 5, it is determined that the user may be inputting a touch command for rotating the display 275, so the display 275 can be controlled to display a prompt UI interface for the rotation operation, whose pattern and/or text prompts the user to complete the input of the subsequent rotation action.
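- in code this gate is a single comparison. A sketch, assuming a hypothetical display object with a show_prompt_ui() method; neither name comes from the specification.

```python
JUDGMENT_COUNT = 5  # preset number of touch points that triggers the prompt

def on_touch_down(touch_points, display):
    """Show the rotation prompt UI when the contact count matches (sketch)."""
    if len(touch_points) == JUDGMENT_COUNT:
        # The user may be entering the rotate command: overlay a translucent
        # prompt whose pattern/text describes the expected rotation action.
        display.show_prompt_ui("rotate-gesture-hint", translucent=True)
```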
- the user can also be prompted to complete the input through more intuitive content such as animations and videos. The prompt UI interface can be displayed on the top layer of the screen in a semi-transparent manner, and this display state is maintained while the user is touching the screen, until the user completes the subsequent rotation action.
- a general function entry based on a unified touch operation may also be set in the control system of the display device 200; that is, in any scenario, the user can call up the prompt UI interface through the set touch action, and the prompt UI interface indicates different interactive actions in the form of multiple graphics, texts, or animations, so that the user can input by following the prompts.
- for example, the wake-up action of the general function entry can be defined as a five-finger touch command, so that after the user touches the screen with five fingers, the prompt UI interface is displayed. Function entry controls can be set directly in the prompt UI interface, so that when the user clicks a control, the corresponding function is executed. Animations of the gestures corresponding to multiple functions can also be displayed in sequence in the prompt UI interface; for example, an animation of multi-finger rotation indicates that the user can start the rotating assembly 276 through a multi-finger rotation command, and an animation of a one-finger downward slide indicates that the user can view the message interface through a one-finger slide command, and so on.
- a finger makes surface contact with the touch screen, that is, a continuous contact area is formed between the finger and the screen, so the above number of touch points may refer to the number of continuous contact areas during the interaction process.
- the touch action input by the user can be determined by judging the sliding track of the touch point. That is, in some embodiments of the present application, as shown in FIG. 19A , the step of extracting the touch action corresponding to the touch command includes:
- S203: compare the shape of the touch action track with the shape of the preset rotation action track and generate a comparison result, so as to determine, according to the comparison result, whether the touch action and the preset rotation action are the same.
- a plane coordinate system can be constructed within the screen range of the display 275, so that any position on the touch component 277 can be represented by the constructed plane coordinate system.
- the touch position can be represented by the coordinates of the touch point.
- the detected coordinates of the continuous touch points can be used to represent the user's touch action trajectory.
- the detected motion trajectory may be graphic data composed of coordinates of multiple touch points.
- by comparing the shape of the touch action track with the shape of the preset rotation action track, it can be determined whether the touch action and the preset rotation action are the same. Since the motion amplitude differs each time the user inputs a touch operation, the shape of the extracted touch action track also takes various forms; to determine whether the touch action matches the preset rotation action, the shape type of the track can be judged directly. If the shape of the touch action track and the shape of the preset rotation action track are the same type of shape, it is determined that the touch action is the same as the preset rotation action.
- for example, if the shape of the touch action track and the shape of the preset rotation action track are both "O"-shaped, it is determined that the touch action is the same as the preset rotation action; whether the user inputs a larger-diameter "O" or a smaller "O", both can be determined as input of the preset rotation action, so as to control the rotating assembly 276 to drive the display 275 to rotate.
- similarly, if the shape of the touch action track and the shape of the preset rotation action track are both a circle, a rectangle, a triangle, a quadrilateral, or the like, it can also be determined that the touch action is the same as the preset rotation action.
- the above embodiment detects the shape of the touch action track formed by the coordinates of consecutive touch points by traversing the touch point coordinates in the touch command, and compares whether the shape of the touch action track and the shape of the preset rotation action track are the same, so as to determine whether the touch action and the preset rotation action are the same.
- This embodiment can improve the error tolerance rate of the system judgment process, so that the user can control the rotation process of the rotating component without inputting an action that is exactly the same as the preset rotation action.
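- a size-independent shape test is one way to obtain this tolerance. The sketch below classifies a trajectory as an "O"-like closed loop regardless of its diameter; the closure_ratio threshold and the function names are assumptions rather than values from the application.

```python
import math

def is_o_shape(points, closure_ratio=0.2):
    """True when an ordered list of (x, y) touch points forms a closed loop."""
    if len(points) < 8:
        return False
    path = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    gap = math.dist(points[0], points[-1])
    # A closed loop travels a long way but ends near where it started,
    # so the test holds for large and small "O" shapes alike.
    return path > 0 and gap / path < closure_ratio

def same_shape(touch_points, preset_points):
    """The action matches when both trajectories have the same shape type."""
    return is_o_shape(touch_points) == is_o_shape(preset_points)
```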
- in some embodiments, the step of controlling the rotating assembly to adjust the rotation state of the display further includes the following:
- the rotation action instruction refers to a sliding action that the user performs on the screen; taking the five-finger touch method as an example, the user can input the rotation action instruction by sliding an arc-shaped trajectory with five fingers on the screen at the same time.
- the rotation action instruction may be one continuous action with the previously input touch command, that is, after the five-finger touch operation is input, the rotation action is input directly by sliding the arc-shaped trajectory.
- the rotation action instruction may also be discrete from the previously input touch command; for example, the touch command may be input by a five-finger touch action, and after the five-finger touch triggers the display of the prompt UI interface, the user then inputs the five-finger rotation action.
- the controller 250 can determine the rotation direction corresponding to the rotation action instruction according to the time series change characteristics of the coordinates of the touch point. For example, when new coordinates of the touch point are continuously added in a clockwise direction based on the coordinates of the initial touch point in the rotation action instruction, the rotation direction is determined to be the clockwise direction.
- the controller 250 then controls the rotation component 276 to rotate the display 275 in the same direction as the rotation direction according to the rotation direction, so as to adjust the rotation state of the display 275 .
- the controller 250 may first detect the current rotational state of the display 275, and the adjacent rotational states of the current rotational state.
- the adjacent states are the two rotation states with the smallest angle difference from the current rotation state among other rotation states different from the current rotation state. For example, if the current rotation state is the landscape state, the two rotation states with the smallest angle difference are "+90° portrait state" and "-90° portrait state” respectively.
- the rotating assembly 276 is then controlled to rotate the display 275 to the adjacent state in the corresponding rotation direction. That is, if the rotation action direction is clockwise, the rotating assembly 276 is controlled to rotate the display 275 clockwise to the adjacent rotation state; for example, the display 275 is rotated 90° clockwise from the landscape state to the "+90° portrait state". Similarly, if the rotation action direction is counterclockwise, the rotating assembly is controlled to rotate the display counterclockwise to the adjacent rotation state; for example, the display 275 is rotated 90° counterclockwise from the landscape state to the "-90° portrait state".
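- the time-series rule can be implemented by summing the signed angle swept around the trajectory's centroid. A Python sketch; the sign convention shown assumes mathematical axes, and screen coordinates (with y pointing down) would flip it, which is an assumption the application does not address.

```python
import math

def rotation_direction(points):
    """Infer clockwise/counterclockwise from ordered (x, y) touch samples."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        a0 = math.atan2(y0 - cy, x0 - cx)
        a1 = math.atan2(y1 - cy, x1 - cx)
        # Wrap each step's angle difference into (-pi, pi].
        total += (a1 - a0 + math.pi) % (2 * math.pi) - math.pi
    return "counterclockwise" if total > 0 else "clockwise"
```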
- the rotation control method further includes:
- the controller 250 can extract the bending angle corresponding to the touch point trajectory in the rotation action instruction.
- the bending angle can be the bending angle of the trajectory graphic, or a judgment angle corresponding to the trajectory graphic; for example, for an arc-shaped trajectory, the bending angle can be the central angle of the circle corresponding to the arc.
- the bending angle can be compared with a preset starting angle threshold, so that when the bending angle is greater than or equal to the preset starting threshold, a control command is sent to the rotating assembly 276 to start it rotating.
- for example, it can be set that rotation starts when the bending angle exceeds 20 degrees. When the bending angle of the touch point trajectory is less than 20 degrees, the rotating assembly 276 is not started, and the prompt UI interface is still displayed on the display 275 to guide the user to continue rotating; when the bending angle of the touch point trajectory is greater than or equal to 20 degrees, a control instruction is sent to the rotating assembly 276 to start its rotation.
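- one way to estimate the bending angle is to sum the turning angles between consecutive segments of the track; for an arc, this sum approximates the central angle. A minimal sketch in which the 20-degree threshold follows the example above, while the estimation method itself is an assumption.

```python
import math

START_THRESHOLD_DEG = 20  # example threshold from the text

def bend_angle_degrees(points):
    """Approximate the central angle swept by an arc-like touch track."""
    total = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        h0 = math.atan2(y1 - y0, x1 - x0)   # heading of the first segment
        h1 = math.atan2(y2 - y1, x2 - x1)   # heading of the next segment
        total += (h1 - h0 + math.pi) % (2 * math.pi) - math.pi
    return abs(math.degrees(total))

def should_start_rotation(points):
    # Start only once the arc bends past the threshold, filtering out
    # accidental small slides.
    return bend_angle_degrees(points) >= START_THRESHOLD_DEG
```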
- by detecting the bending angle of the touch point trajectory and comparing it with the preset activation threshold, the rotating assembly 276 is activated only when the angle corresponding to the rotation action command is relatively large, which alleviates misoperation, that is, prevents the rotation action from being triggered accidentally. At the same time, the prompt UI interface guides the user to accurately complete the input of the rotation action instruction, so that when the user does want to rotate the display, the rotation can be completed smoothly.
- the rotational state further includes a tilted state.
- under normal use, the display device 200 will not remain in a tilted state for long; the tilted state is usually caused by abnormal conditions, such as stalling due to mechanical hardware failure or stuck foreign objects, that is, during rotation the rotating assembly 276 fails to drive the display 275 to the predetermined position. Since the tilted state affects the user's viewing experience, if the current rotation state of the display 275 is a tilted state, the rotation control method further includes:
- after the touch command is acquired, the rotating assembly is controlled to rotate the display to the landscape or portrait state with the smallest angle difference from the tilted state.
- the controller 250 can detect the current rotation state of the display 275 through a device such as a gravitational angular velocity sensor built in the display device 200 . For example, when it is detected that the current rotation state of the display 275 is tilt+5 degrees, the horizontal screen state or the vertical screen state with the smallest angle difference from the current tilt state can be determined according to the detected tilt angle. It can be seen that the angle difference between the landscape state and the current tilt state is 5 degrees, and the angle difference between the portrait state and the current tilt state is 85 degrees. Therefore, the rotation component 276 can be controlled to rotate 5 degrees counterclockwise to the landscape state.
- in some embodiments, after acquiring the rotation action instruction, the controller 250 may instead control the rotating assembly 276 to rotate the display 275 to the landscape or portrait state according to the rotation direction. For example, when the current rotation state of the display 275 is detected as tilted +5 degrees and the rotation direction corresponding to the rotation action command input by the user on the screen is clockwise, the display 275 is still rotated 85 degrees clockwise to the portrait state, even though the angle difference between the landscape state and the current tilt state is only 5 degrees.
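- the nearest-standard-state correction reduces to a small lookup. A Python sketch assuming a sensor convention in which 0 degrees means landscape and +90/-90 degrees are the two portrait states; that convention is an assumption, not something the application fixes.

```python
def correction_target(tilt_deg):
    """Pick the standard state nearest to the current tilt (sketch)."""
    candidates = {"landscape": 0, "+90 portrait": 90, "-90 portrait": -90}
    name = min(candidates, key=lambda k: abs(tilt_deg - candidates[k]))
    return name, candidates[name] - tilt_deg  # target state, signed correction

# Example from the text: a +5 degree tilt is 5 degrees from landscape but
# 85 degrees from the nearest portrait state, so rotate back by -5 degrees.
print(correction_target(5))  # ('landscape', -5)
```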
- the above embodiment can correct the tilt of the display 275 whenever its rotation state is detected to be a tilted state, keeping the display 275 in a standard state suitable for viewing, without otherwise affecting the rotation process.
- some embodiments of the present application further provide a display device 200 , including: a display 275 , a rotation component 276 , a touch component 277 , and a controller 250 .
- the display 275 is configured to present a specific user interface or play screen.
- the rotating assembly 276 is connected to the display 275 and configured to drive the display 275 to rotate, so that the display 275 is in one of various rotation states.
- the touch component 277 is disposed on the screen of the display 275 and is configured to detect the touch command input by the user.
- the display 275 , the rotating component 276 and the touch component 277 are all electrically connected to the controller 250 .
- the controller 250 described above is configured to perform the following program steps:
- the controller 250 can obtain the touch command for rotating the display 275 through the touch component 277, extract the touch action corresponding to the touch command from the data detected by the touch component 277, and determine whether the touch action is the same as the preset rotation action, thereby controlling the rotating assembly 276 to adjust the rotation state of the display 275.
- the present application provides a display device 200 and a rotation control method, and the rotation control method can be applied to the display device 200 for adjusting the rotation state of the display 275 in the display device 200 .
- the method can, in response to the touch command, extract the touch action corresponding to the touch command, and compare the touch action with the preset rotation action. If the touch action is the same as the preset rotation action, the rotation component 276 is controlled to adjust the rotation state of the display 275 .
- the method controls the rotation of the rotating assembly 276 through touch interaction: detection is triggered when the user's fingers press on the touch component 277, and when the detection result matches the preset rotation action, the display is driven to rotate.
- the user can freely operate the rotation of the display device 200 without relying on peripheral devices such as a remote control.
Abstract
本申请实施例提供了一种显示设备及显示方法,显示设备包括:摄像头,用于采集第一深度图像;显示器;分别与显示器及摄像头连接的控制器,控制器被配置为:响应于接收到用户输入的用于指示混合通话的控制信号,向被叫端发送混合通话请求;根据接收到被叫端的确认信号,向服务器发送第一深度图像;根据接收到来自服务器的混合图像,控制显示器显示第一混合图像,其中,第一混合图像包括服务器根据第二人物的深度信息,将第二人物渲染到第一深度图像中得到的深度图像,第二人物为第二深度图像中的人物,第二深度图像为被叫端采集的深度图像。
Description
本申请要求在2020年7月3日提交中国专利局、申请号为202010635659.X申请名称为“显示设备、视频通话方法及服务器”的优先权;其全部内容通过引用结合在本申请中;本申请要求在2020年7月31日提交中国专利局、申请号为202010760662.4申请名称为“一种显示设备及旋转控制方法”的优先权,其全部内容通过引用结合在本公开中。
本申请涉及显示设备技术领域,尤其涉及一种显示设备及显示方法。
在当前快节奏的生活方式下,朋友、家人见面的机会逐渐变少,越来越多地感情联络依托视频通话来进行。目前,具有摄像头的移动设备可通过安装视频通话应用程序实现视频通话,然而,移动设备的显示屏较小,且移动设备通常为手持,导致用户只能看到通话对方人物的头像,通话体验不佳。智能电视通过增加摄像头组件,使得基于电视的视频通话成为可能。智能电视的显示屏较大,且用户与智能电视通常会保持一定距离,使得用户可看到通话对方的更多信息。
发明内容
本申请提供了一种显示设备,该显示设备包括:
摄像头,用于采集第一深度图像;
显示器,用于显示用户界面,及在所述用户界面中显示用于指示在用户界面中项目被选择的选择器;
分别与所述显示器及摄像头连接的控制器,所述控制器被配置为:
响应于接收到用户输入的用于指示混合通话的控制信号,向被叫端发送混合通话请求;
根据接收到所述被叫端的确认信号,向服务器发送所述第一深度图像;
根据接收到来自所述服务器的混合图像,控制所述显示器显示所述第一混合图像,其中,所述第一混合图像包括所述服务器根据第二人物的深度信息,将所述第二人物渲染到所述第一深度图像中得到的深度图像,所述第二人物为第二深度图像中的人物,所述第二深度图像为所述被叫端采集的深度图像。
为了更清楚地说明本申请的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1中示例性示出了根据一些实施例的显示设备与控制装置之间操作场景的示意图;
图2中示例性示出了根据一些实施例的显示设备200的硬件配置框图;
图3中示例性示出了根据一些实施例的控制装置100的硬件配置框图;
图4中示例性示出了根据一些实施例的显示设备200中软件配置示意图;
图5中示例性示出了根据一些实施例的显示设备200中应用程序的图标控件界面显示 示意图;
图6中示例性示出了根据一些实施例的AR混合通话示意图;
图7中示例性示出了根据一些实施例的混合通话交互示意图;
图8中示例性示出了根据一些实施例的视频通话界面示意图;
图9中示例性示出了根据一些实施例的混合通话界面示意图;
图10中示例性示出了根据另一些实施例的混合通话界面示意图;
图11中示例性示出了根据一些实施例的视频通话方法的流程示意图;
图12为本申请一些实施例中一种显示设备的后视图;
图13为本申请一些实施例中控制装置的硬件配置框图;
图14为本申请一些实施例中显示设备的硬件配置框图;
图15为本申请一些实施例中显示设备存储器中操作系统的架构配置框图;
图16A为本申请一些实施例中显示设备的横屏状态的示意图;
图16B为本申请一些实施例中显示设备的竖屏状态的示意图;
图17A为本申请一些实施例中旋转控制方法流程示意图;
图17B为本申请一些实施例中触控旋转过程示意图;
图18A为本申请一些实施例中控制显示提示UI界面的流程示意图;
图18B为本申请一些实施例中提示UI界面示意图;
图19A为本申请一些实施例中确定触控动作与预设旋转动作是否相同的流程示意图;
图19B为本申请一些实施例中触控动作示意图;
图20为本申请一些实施例中控制旋转组件调整显示器的旋转状态的流程示意图;
图21为本申请一些实施例中根据弯曲角度控制旋转组件转动的流程示意图;
图22为本申请一些实施例中一种显示设备的结构示意图。
为使本申请的目的、实施方式和优点更加清楚,下面将结合本申请示例性实施例中的附图,对本申请示例性实施方式进行清楚、完整地描述,显然,所描述的示例性实施例仅是本申请一部分实施例,而不是全部的实施例。
基于本申请描述的示例性实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请所附权利要求保护的范围。此外,虽然本申请中公开内容按照示范性一个或几个实例来介绍,但应理解,可以就这些公开内容的各个方面也可以单独构成一个完整实施方式。
需要说明的是,本申请中对于术语的简要说明,仅是为了方便理解接下来描述的实施方式,而不是意图限定本申请的实施方式。除非另有说明,这些术语应当按照其普通和通常的含义理解。
本申请中说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”等是用于区别类似或同类的对象或实体,而不必然意味着限定特定的顺序或先后次序,除非另外注明(Unless otherwise indicated)。应该理解这样使用的用语在适当情况下可以互换,例如能够根据本申请实施例图示或描述中给出那些以外的顺序实施。
此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖但不排他的包含, 例如,包含了一系列组件的产品或设备不必限于清楚地列出的那些组件,而是可包括没有清楚地列出的或对于这些产品或设备固有的其它组件。
本申请中使用的术语“模块”,是指任何已知或后来开发的硬件、软件、固件、人工智能、模糊逻辑或硬件或/和软件代码的组合,能够执行与该元件相关的功能。
本申请中使用的术语“遥控器”,是指电子设备(如本申请中公开的显示设备)的一个组件,通常可在较短的距离范围内无线控制电子设备。一般使用红外线和/或射频(RF)信号和/或蓝牙与电子设备连接,也可以包括WiFi、无线USB、蓝牙、动作传感器等功能模块。例如:手持式触摸遥控器,是以触摸屏中用户界面取代一般遥控装置中的大部分物理内置硬键。
本申请中使用的术语“手势”,是指用户通过一种手型的变化或手部运动等动作,用于表达预期想法、动作、目的/或结果的用户行为。
图1中示例性示出了根据实施例中显示设备与控制装置之间操作场景的示意图。如图1中示出,用户可通过移动终端300和控制装置100操作显示设备200。
在一些实施例中,控制装置100可以是遥控器,遥控器和显示设备的通信包括红外协议通信或蓝牙协议通信,及其他短距离通信方式等,通过无线或其他有线方式来控制显示设备200。用户可以通过遥控器上按键,语音输入、控制面板输入等输入用户指令,来控制显示设备200。如:用户可以通过遥控器上音量加减键、频道控制键、上/下/左/右的移动按键、语音输入按键、菜单键、开关机按键等输入相应控制指令,来实现控制显示设备200的功能。
在一些实施例中,也可以使用移动终端、平板电脑、计算机、笔记本电脑、和其他智能设备以控制显示设备200。例如,使用在智能设备上运行的应用程序控制显示设备200。该应用程序通过配置可以在与智能设备关联的屏幕上,在直观的用户界面(UI)中为用户提供各种控制。
在一些实施例中,移动终端300可与显示设备200安装软件应用,通过网络通信协议实现连接通信,实现一对一控制操作的和数据通信的目的。如:可以实现用移动终端300与显示设备200建立控制指令协议,将遥控控制键盘同步到移动终端300上,通过控制移动终端300上用户界面,实现控制显示设备200的功能。也可以将移动终端300上显示音视频内容传输到显示设备200上,实现同步显示功能。
如图1中还示出,显示设备200还与服务器400通过多种通信方式进行数据通信。可允许显示设备200通过局域网(LAN)、无线局域网(WLAN)和其他网络进行通信连接。服务器400可以向显示设备200提供各种内容和互动。示例的,显示设备200通过发送和接收信息,以及电子节目指南(EPG)互动,接收软件程序更新,或访问远程储存的数字媒体库。服务器400可以是一个集群,也可以是多个集群,可以包括一类或多类服务器。通过服务器400提供视频点播和广告服务等其他网络服务内容。
显示设备200,可以液晶显示器、OLED显示器、投影显示设备。具体显示设备类型,尺寸大小和分辨率等不作限定,本领技术人员可以理解的是,显示设备200可以根据需要做性能和配置上一些改变。
显示设备200除了提供广播接收电视功能之外,还可以附加提供计算机支持功能的智能网络电视功能,包括但不限于,网络电视、智能电视、互联网协议电视(IPTV)等。
图2中示例性示出了根据示例性实施例中显示设备200的硬件配置框图。
在一些实施例中,显示设备200中包括控制器250、调谐解调器210、通信器220、检测器230、输入/输出接口255、显示器275,音频输出接口285、存储器260、供电电源290、用户接口265、外部装置接口240中的至少一种。
在一些实施例中,显示器275,用于接收源自第一处理器输出的图像信号,进行显示视频内容和图像以及菜单操控界面的组件。
在一些实施例中,显示器275,包括用于呈现画面的显示屏组件,以及驱动图像显示的驱动组件。
在一些实施例中,显示视频内容,可以来自广播电视内容,也可以是说,可通过有线或无线通信协议接收的各种广播信号。或者,可显示来自网络通信协议接收来自网络服务器端发送的各种图像内容。
在一些实施例中,显示器275用于呈现显示设备200中产生且用于控制显示设备200的用户操控UI界面。
在一些实施例中,根据显示器275类型不同,还包括用于驱动显示的驱动组件。
在一些实施例中,显示器275为一种投影显示器,还可以包括一种投影装置和投影屏幕。
在一些实施例中,通信器220是用于根据各种通信协议类型与外部设备或外部服务器进行通信的组件。例如:通信器可以包括Wifi芯片,蓝牙通信协议芯片,有线以太网通信协议芯片等其他网络通信协议芯片或近场通信协议芯片,以及红外接收器中的至少一种。
在一些实施例中,显示设备200可以通过通信器220与外部控制装置100或内容提供设备之间建立控制信号和数据信号发送和接收。
在一些实施例中,用户接口265,可用于接收控制装置100(如:红外遥控器等)红外控制信号。
在一些实施例中,检测器230是显示设备200用于采集外部环境或与外部交互的信号。
在一些实施例中,检测器230包括光接收器,用于采集环境光线强度的传感器,可以通过采集环境光可以自适应性显示参数变化等。
在一些实施例中,检测器230还可以包括图像采集器,如相机、摄像头等,可以用于采集外部环境场景,以及用于采集用户的属性或与用户交互手势,可以自适应变化显示参数,也可以识别用户手势,以实现与用户之间互动的功能。
在一些实施例中,检测器230还可以包括温度传感器等,如通过感测环境温度。
在一些实施例中,显示设备200可自适应调整图像的显示色温。如当温度偏高的环境时,可调整显示设备200显示图像色温偏冷色调,或当温度偏低的环境时,可以调整显示设备200显示图像偏暖色调。
在一些实施例中,检测器230还可声音采集器等,如麦克风,可以用于接收用户的声音。示例性的,包括用户控制显示设备200的控制指令的语音信号,或采集环境声音,用于识别环境场景类型,使得显示设备200可以自适应环境噪声。
在一些实施例中,如图2所示,输入/输出接口255被配置为,可进行控制器250与外部其他设备或其他控制器250之间的数据传输。如接收外部设备的视频信号数据和音频信号数据、或命令指令数据等。
在一些实施例中,外部装置接口240可以包括,但不限于如下:可以高清多媒体接口HDMI接口、模拟或数据高清分量输入接口、复合视频输入接口、USB输入接口、RGB端 口等任一个或多个接口。也可以是上述多个接口形成复合性的输入/输出接口。
在一些实施例中,如图2所示,调谐解调器210被配置为,通过有线或无线接收方式接收广播电视信号,可以进行放大、混频和谐振等调制解调处理,从多个无线或有线广播电视信号中解调出音视频信号,该音视频信号可以包括用户所选择电视频道频率中所携带的电视音视频信号,以及EPG数据信号。
在一些实施例中,调谐解调器210解调的频点受到控制器250的控制,控制器250可根据用户选择发出控制信号,以使的调制解调器响应用户选择的电视信号频率以及调制解调该频率所携带的电视信号。
在一些实施例中,广播电视信号可根据电视信号广播制式不同区分为地面广播信号、有线广播信号、卫星广播信号或互联网广播信号等。或者根据调制类型不同可以区分为数字调制信号,模拟调制信号等。或者根据信号种类不同区分为数字信号、模拟信号等。
在一些实施例中,控制器250和调谐解调器210可以位于不同的分体设备中,即调谐解调器210也可在控制器250所在的主体设备的外置设备中,如外置机顶盒等。这样,机顶盒将接收到的广播电视信号调制解调后的电视音视频信号输出给主体设备,主体设备经过第一输入/输出接口接收音视频信号。
在一些实施例中,控制器250,通过存储在存储器上中各种软件控制程序,来控制显示设备的工作和响应用户的操作。控制器250可以控制显示设备200的整体操作。例如:响应于接收到用于选择在显示器275上显示UI对象的用户命令,控制器250便可以执行与由用户命令选择的对象有关的操作。
在一些实施例中,所述对象可以是可选对象中的任何一个,例如超链接或图标。与所选择的对象有关操作,例如:显示连接到超链接页面、文档、图像等操作,或者执行与所述图标相对应程序的操作。用于选择UI对象用户命令,可以是通过连接到显示设备200的各种输入装置(例如,鼠标、键盘、触摸板等)输入命令或者与由用户说出语音相对应的语音命令。
如图2所示,控制器250包括随机存取存储器251(Random Access Memory,RAM)、只读存储器252(Read-Only Memory,ROM)、视频处理器270、音频处理器280、其他处理器253(例如:图形处理器(Graphics Processing Unit,GPU)、中央处理器254(Central Processing Unit,CPU)、通信接口(Communication Interface),以及通信总线256(Bus)中的至少一种。其中,通信总线连接各个部件。
在一些实施例中,RAM 251用于存储操作系统或其他正在运行中的程序的临时数据。
在一些实施例中,ROM 252用于存储各种系统启动的指令。
在一些实施例中,ROM 252用于存储一个基本输入输出系统,称为基本输入输出系统(Basic Input Output System,BIOS)。用于完成对系统的加电自检、系统中各功能模块的初始化、系统的基本输入/输出的驱动程序及引导操作系统。
在一些实施例中,在收到开机信号时,显示设备200电源开始启动,CPU运行ROM 252中系统启动指令,将存储在存储器的操作系统的临时数据拷贝至RAM 251中,以便于启动或运行操作系统。当操作系统启动完成后,CPU再将存储器中各种应用程序的临时数据拷贝至RAM 251中,然后,以便于启动或运行各种应用程序。
在一些实施例中,CPU处理器254,用于执行存储在存储器中操作系统和应用程序指令。以及根据接收外部输入的各种交互指令,来执行各种应用程序、数据和内容,以便最 终显示和播放各种音视频内容。
在一些示例性实施例中,CPU处理器254,可以包括多个处理器。多个处理器可包括一个主处理器以及一个或多个子处理器。主处理器,用于在预加电模式中执行显示设备200一些操作,和/或在正常模式下显示画面的操作。一个或多个子处理器,用于在待机模式等状态下一种操作。
在一些实施例中,图形处理器253,用于产生各种图形对象,如:图标、操作菜单、以及用户输入指令显示图形等。包括运算器,通过接收用户输入各种交互指令进行运算,根据显示属性显示各种对象。以及包括渲染器,对基于运算器得到的各种对象,进行渲染,上述渲染后的对象用于显示在显示器上。
在一些实施例中,视频处理器270被配置为将接收外部视频信号,根据输入信号的标准编解码协议,进行解压缩、解码、缩放、降噪、帧率转换、分辨率转换、图像合成等视频处理,可得到直接可显示设备200上显示或播放的信号。
在一些实施例中,视频处理器270,包括解复用模块、视频解码模块、图像合成模块、帧率转换模块、显示格式化模块等。
其中,解复用模块,用于对输入音视频数据流进行解复用处理,如输入MPEG-2,则解复用模块进行解复用成视频信号和音频信号等。
视频解码模块,则用于对解复用后的视频信号进行处理,包括解码和缩放处理等。
图像合成模块,如图像合成器,其用于将图形生成器根据用户输入或自身生成的GUI信号,与缩放处理后视频图像进行叠加混合处理,以生成可供显示的图像信号。
帧率转换模块,用于对转换输入视频帧率,如将60Hz帧率转换为120Hz帧率或240Hz帧率,通常的格式采用如插帧方式实现。
显示格式化模块,则用于将接收帧率转换后视频输出信号,改变信号以符合显示格式的信号,如输出RGB数据信号。
在一些实施例中,图形处理器253可以和视频处理器可以集成设置,也可以分开设置,集成设置的时候可以执行输出给显示器的图形信号的处理,分离设置的时候可以分别执行不同的功能,例如GPU+FRC(Frame Rate Conversion))架构。
在一些实施例中,音频处理器280,用于接收外部的音频信号,根据输入信号的标准编解码协议,进行解压缩和解码,以及降噪、数模转换、和放大处理等处理,得到可以在扬声器中播放的声音信号。
在一些实施例中,视频处理器270可以包括一颗或多颗芯片组成。音频处理器,也可以包括一颗或多颗芯片组成。
在一些实施例中,视频处理器270和音频处理器280,可以单独的芯片,也可以于控制器一起集成在一颗或多颗芯片中。
在一些实施例中,音频输出,在控制器250的控制下接收音频处理器280输出的声音信号,如:扬声器286,以及除了显示设备200自身携带的扬声器之外,可以输出至外接设备的发声装置的外接音响输出端子,如:外接音响接口或耳机接口等,还可以包括通信接口中的近距离通信模块,例如:用于进行蓝牙扬声器声音输出的蓝牙模块。
供电电源290,在控制器250控制下,将外部电源输入的电力为显示设备200提供电源供电支持。供电电源290可以包括安装显示设备200内部的内置电源电路,也可以是安装在显示设备200外部电源,在显示设备200中提供外接电源的电源接口。
用户接口265,用于接收用户的输入信号,然后,将接收用户输入信号发送给控制器250。用户输入信号可以是通过红外接收器接收的遥控器信号,可以通过网络通信模块接收各种用户控制信号。
在一些实施例中,用户通过控制装置100或移动终端300输入用户命令,用户输入接口则根据用户的输入,显示设备200则通过控制器250响应用户的输入。
在一些实施例中,用户可在显示器275上显示的图形用户界面(GUI)输入用户命令,则用户输入接口通过图形用户界面(GUI)接收用户输入命令。或者,用户可通过输入特定的声音或手势进行输入用户命令,则用户输入接口通过传感器识别出声音或手势,来接收用户输入命令。
在一些实施例中,“用户界面”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面常用的表现形式是图形用户界面(Graphic User Interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的一个图标、窗口、控件等界面元素,其中控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
存储器260,包括存储用于驱动显示设备200的各种软件模块。如:第一存储器中存储的各种软件模块,包括:基础模块、检测模块、通信模块、显示控制模块、浏览器模块、和各种服务模块等中的至少一种。
基础模块用于显示设备200中各个硬件之间信号通信、并向上层模块发送处理和控制信号的底层软件模块。检测模块用于从各种传感器或用户输入接口中收集各种信息,并进行数模转换以及分析管理的管理模块。
例如,语音识别模块中包括语音解析模块和语音指令数据库模块。显示控制模块用于控制显示器进行显示图像内容的模块,可以用于播放多媒体图像内容和UI界面等信息。通信模块,用于与外部设备之间进行控制和数据通信的模块。浏览器模块,用于执行浏览服务器之间数据通信的模块。服务模块,用于提供各种服务以及各类应用程序在内的模块。同时,存储器260还用存储接收外部数据和用户数据、各种用户界面中各个项目的图像以及焦点对象的视觉效果图等。
图3示例性示出了根据示例性实施例中控制装置100的配置框图。如图3所示,控制装置100包括控制器110、通信接口130、用户输入/输出接口、存储器、供电电源。
控制装置100被配置为控制显示设备200,以及可接收用户的输入操作指令,且将操作指令转换为显示设备200可识别和响应的指令,起用用户与显示设备200之间交互中介作用。如:用户通过操作控制装置100上频道加减键,显示设备200响应频道加减的操作。
在一些实施例中,控制装置100可是一种智能设备。如:控制装置100可根据用户需求安装控制显示设备200的各种应用。
在一些实施例中,如图1所示,移动终端300或其他智能电子设备,可在安装操控显示设备200的应用之后,可以起到控制装置100类似功能。如:用户可以通过安装应用,在移动终端300或其他智能电子设备上可提供的图形用户界面的各种功能键或虚拟按钮,以实现控制装置100实体按键的功能。
控制器110包括处理器112和RAM 113和ROM 114、通信接口130以及通信总线。控制器用于控制控制装置100的运行和操作,以及内部各部件之间通信协作以及外部和内 部的数据处理功能。
通信接口130在控制器110的控制下,实现与显示设备200之间控制信号和数据信号的通信。如:将接收到的用户输入信号发送至显示设备200上。通信接口130可包括WiFi芯片131、蓝牙模块132、NFC模块133等其他近场通信模块中至少之一种。
用户输入/输出接口140,其中,输入接口包括麦克风141、触摸板142、传感器143、按键144等其他输入接口中至少一者。如:用户可以通过语音、触摸、手势、按压等动作实现用户指令输入功能,输入接口通过将接收的模拟信号转换为数字信号,以及数字信号转换为相应指令信号,发送至显示设备200。
输出接口包括将接收的用户指令发送至显示设备200的接口。在一些实施例中,可以红外接口,也可以是射频接口。如:红外信号接口时,需要将用户输入指令按照红外控制协议转化为红外控制信号,经红外发送模块进行发送至显示设备200。再如:射频信号接口时,需将用户输入指令转化为数字信号,然后按照射频控制信号调制协议进行调制后,由射频发送端子发送至显示设备200。
在一些实施例中,控制装置100包括通信接口130和输入输出接口140中至少一者。控制装置100中配置通信接口130,如:WiFi、蓝牙、NFC等模块,可将用户输入指令通过WiFi协议、或蓝牙协议、或NFC协议编码,发送至显示设备200。
存储器190,用于在控制器的控制下存储驱动和控制控制设备200的各种运行程序、数据和应用。存储器190,可以存储用户输入的各类控制信号指令。
供电电源180,用于在控制器的控制下为控制装置100各元件提供运行电力支持。可以电池及相关控制电路。
在一些实施例中,系统可以包括内核(Kernel)、命令解析器(shell)、文件系统和应用程序。内核、shell和文件系统一起组成了基本的操作系统结构,它们让用户可以管理文件、运行程序并使用系统。上电后,内核启动,激活内核空间,抽象硬件、初始化硬件参数等,运行并维护虚拟内存、调度器、信号及进程间通信(IPC)。内核启动后,再加载Shell和用户应用程序。应用程序在启动后被编译成机器码,形成一个进程。
参见图4,在一些实施例中,将系统分为四层,从上至下分别为应用程序(Applications)层(简称“应用层”),应用程序框架(Application Framework)层(简称“框架层”),安卓运行时(Android runtime)和系统库层(简称“系统运行库层”),以及内核层。
在一些实施例中,应用程序层中运行有至少一个应用程序,这些应用程序可以是操作系统自带的窗口(Window)程序、系统设置程序、时钟程序、相机应用等;也可以是第三方开发者所开发的应用程序,比如嗨见程序、K歌程序、魔镜程序等。在具体实施时,应用程序层中的应用程序包不限于以上举例,实际还可以包括其它应用程序包,本申请实施例对此不做限制。
框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。应用程序框架层相当于一个处理中心,这个中心决定让应用层中的应用程序做出动作。应用程序通过API接口,可在执行中访问系统中的资源和取得系统的服务。
如图4所示,本申请实施例中应用程序框架层包括管理器(Managers),内容提供者(Content Provider)等,其中管理器包括以下模块中的至少一个:活动管理器(Activity Manager)用与和系统中正在运行的所有活动进行交互;位置管理器(Location Manager) 用于给系统服务或应用提供了系统位置服务的访问;文件包管理器(Package Manager)用于检索当前安装在设备上的应用程序包相关的各种信息;通知管理器(Notification Manager)用于控制通知消息的显示和清除;窗口管理器(Window Manager)用于管理用户界面上的括图标、窗口、工具栏、壁纸和桌面部件。
在一些实施例中,活动管理器用于:管理各个应用程序的生命周期以及通常的导航回退功能,比如控制应用程序的退出(包括将显示窗口中当前显示的用户界面切换到系统桌面)、打开、后退(包括将显示窗口中当前显示的用户界面切换到当前显示的用户界面的上一级用户界面)等。
在一些实施例中,窗口管理器用于管理所有的窗口程序,比如获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕,控制显示窗口变化(例如将显示窗口缩小显示、抖动显示、扭曲变形显示等)等。
在一些实施例中,系统运行库层为上层即框架层提供支撑,当框架层被使用时,安卓操作系统会运行系统运行库层中包含的C/C++库以实现框架层要实现的功能。
在一些实施例中,内核层是硬件和软件之间的层。如图4所示,内核层至少包含以下驱动中的至少一种:音频驱动、显示驱动、蓝牙驱动、摄像头驱动、WIFI驱动、USB驱动、HDMI驱动、传感器驱动(如指纹传感器,温度传感器,触摸传感器、压力传感器等)等。
在一些实施例中,内核层还包括用于进行电源管理的电源驱动模块。
在一些实施例中,图4中的软件架构对应的软件程序和/或模块存储在图2或图3所示的第一存储器或第二存储器中。
在一些实施例中,以魔镜应用(拍照应用)为例,当遥控接收装置接收到遥控器输入操作,相应的硬件中断被发给内核层。内核层将输入操作加工成原始输入事件(包括输入操作的值,输入操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,根据焦点当前的位置识别该输入事件所对应的控件以及以该输入操作是确认操作,该确认操作所对应的控件为魔镜应用图标的控件,魔镜应用调用应用框架层的接口,启动魔镜应用,进而通过调用内核层启动摄像头驱动,实现通过摄像头捕获静态图像或视频。
在一些实施例中,对于具备触控功能的显示设备,以分屏操作为例,显示设备接收用户作用于显示屏上的输入操作(如分屏操作),内核层可以根据输入操作产生相应的输入事件,并向应用程序框架层上报该事件。由应用程序框架层的活动管理器设置与该输入操作对应的窗口模式(如多窗口模式)以及窗口位置和大小等。应用程序框架层的窗口管理根据活动管理器的设置绘制窗口,然后将绘制的窗口数据发送给内核层的显示驱动,由显示驱动在显示屏的不同显示区域显示与之对应的应用界面。
在一些实施例中,如图5中所示,应用程序层包含至少一个应用程序可以在显示器中显示对应的图标控件,如:直播电视应用程序图标控件、视频点播应用程序图标控件、媒体中心应用程序图标控件、应用程序中心图标控件、游戏应用图标控件等。
在一些实施例中,直播电视应用程序,可以通过不同的信号源提供直播电视。例如,直播电视应用程可以使用来自有线电视、无线广播、卫星服务或其他类型的直播电视服务的输入提供电视信号。以及,直播电视应用程序可在显示设备200上显示直播电视信号的视频。
在一些实施例中,视频点播应用程序,可以提供来自不同存储源的视频。不同于直播电视应用程序,视频点播提供来自某些存储源的视频显示。例如,视频点播可以来自云存储的服务器端、来自包含已存视频节目的本地硬盘储存器。
在一些实施例中,媒体中心应用程序,可以提供各种多媒体内容播放的应用程序。例如,媒体中心,可以为不同于直播电视或视频点播,用户可通过媒体中心应用程序访问各种图像或音频所提供服务。
在一些实施例中,应用程序中心,可以提供储存各种应用程序。应用程序可以是一种游戏、应用程序,或某些和计算机系统或其他设备相关但可以在智能电视中运行的其他应用程序。应用程序中心可从不同来源获得这些应用程序,将它们储存在本地储存器中,然后在显示设备200上可运行。
在一些实施例中的硬件或软件架构可以基于上述实施例中的介绍,在一些实施例中可以是基于相近的其他硬件或软件架构,可以实现本申请的技术方案即可。
第一方面:
在一些实施例中,显示设备200的图像采集器可包括摄像头,用户可通过显示设备200上安装的视频通话类应用程序,与使用另一个显示设备200的用户进行视频通话。相关技术中,显示设备200显示的通话界面包括两个窗口,视频通话的双方设备采集的图像分别显示在通话界面不同的窗口中。然而,在上述视频通话场景下,两个窗口中人物的背景通常不同,视频通话的双方人物分别处于两个不同背景下,不能破除显示设备200的屏幕硬边界限制,用户体验不佳。
为解决上述技术问题,本申请基于AR技术,提供了一种混合通话方案,该混合通话方案基于显示设备200的摄像头为3D摄像头模组,可实现AR混合通话。在一些实施例中,3D摄像头模组可包括3D摄像头和其他摄像头,如广角摄像头、微距摄像头、主摄像头等摄像头;在另一些实施例中,3D摄像头模组也可仅包括3D摄像头。
参见图6,为根据一些实施例的AR混合通话示意图。如图6所示,两个显示设备200的3D摄像头模组分别采集深度图像,并将各自采集的深度图像上传到服务器,服务器可根据两个深度图像,将视频通话的双方人物混合在同一背景中,进而使两个显示设备200都能显示双方人物在同一背景下的图像,提升了视频聊天体验。
下面对混合通话的过程做进一步描述。参见图7,为根据一些实施例的混合通话交互示意图。如图7所示,在一些实施例中,当双方用户通过两个显示设备200开启视频通话后,主叫端和被叫端可通过服务器进行混合通话,其中,发出混合通话请求的显示设备200可称为主叫端,接收混合通话请求的显示设备200可称为被叫端。
在一些实施例中,视频通话应用程序还具有语音通话功能以及语音通话切换为视频通话功能,因此,本申请实施例提供的混合通话方案,也可适用于语音通话场景,使用户从语音通话切换为混合通话。
参见图8,为根据一些实施例的视频通话界面示意图,如图8所示,视频通话界面包括两个窗口,其中一个窗口显示主叫端的人物及背景,另一个窗口显示被叫端的人物及背景,为便于区分,主叫端的用户可称为第一人物,被叫端的用户可称为第二人物,第一人物和第二人物的背景通常不相同,图8中,横条纹用于表示第一人物的背景,竖条纹用于表示第二人物的背景,实际实施中,横条纹通常为第一人物所在的环境,竖条纹通常为第二人物所在的环境。
在一些实施例中,显示设备200的控制器在视频通话应用程序启动后,可查询是否支持AR混合通话。根据视频通话应用程序具备3D摄像头模组的启用条件,可判定支持AR混合通话。启用条件可包括显示设备200具备3D摄像头模组,视频通话应用程序具有3D摄像头模组的使用权限,以及3D摄像头模组工作正常等条件。如果显示设备200检测到视频通话应用程序具备3D摄像头模组的启用条件,可如图8所示,控制显示器在视频通话界面显示混合通话控件。混合通话控件的名称可为“AR通话”,混合通话控件的触发方式可为语音触发、单击触发等方式,混合通话控件的触发信号可为用于指示混合通话的控制信号,当然,用于指示混合通话的控制信号还可为其他信号,如预设的手势信号、屏幕任意位置的双击信号等信号。
在一些实施例中,显示设备200的控制器在视频通话应用程序启动后,可直接如图8所示显示混合通话控件,在接收到用于指示混合通话的控制信号后,再检测视频通话应用程序是否具备3D摄像头模组的启用条件。
用户在显示设备200上通过单击混合通话控件等方式在显示设备200上输入用于指示混合通话的控制信号后,则该显示设备200成为主叫端,该用户成为第一用户。
由于在一些实施例中,主叫端在检测到支持AR混合通话后再显示混合通话控件,因此,在接收到用于指示混合通话的控制信号后,主叫端可直接生成混合通话请求,通过服务器将混合通话请求发送到被叫端,节省了3D摄像头模组的检测时间;而在一些实施例中,主叫端未在接收到用于指示混合通话的控制信号之前没有检测是否支持AR混合通话,而上述启用条件也可能随时发生变化,如用户关闭了3D摄像头模组的使用权限,因此,主叫端可在用户输入用于指示混合通话的控制信号后,检测是否支持AR混合通话,确保主叫端的3D摄像头模组能正常启用,主叫端在检测到3D摄像头模组能正常启用后,生成混合通话请求,将混合通话发送到服务器,服务器可将混合通话请求发送到被叫端,从而查询被叫端是否支持及接受AR混合通话。
After receiving the mixed call request, the called end can query whether it supports AR mixed calls, again according to whether the video call application satisfies the enabling conditions of the 3D camera module. If the video call application does not support AR mixed calls, a signal indicating no support for AR mixed calls can be fed back to the server, and the server sends this signal to the calling end, for the calling end to display prompt information that the other party does not support AR mixed calls. If the video call application supports AR mixed calls, second prompt information is generated and the display is controlled to show it. The second prompt information may include a prompt box and selection controls. The content of the prompt box may ask whether to accept the mixed call, such as "Confirm the mixed call?". There may be two selection controls: one, when triggered, indicates that the user of the called end accepts the mixed call; the other, when triggered, indicates that the user of the called end rejects the mixed call.
When the called end receives a control signal input by the user corresponding to the second prompt information, and the control signal is a signal rejecting the mixed call, the called end generates a rejection signal and sends it to the server, and the server forwards the rejection signal to the calling end.
Upon receiving the rejection signal, the calling end generates third prompt information and controls the display to show it. The third prompt information may include a prompt box whose content indicates that the other party has rejected the mixed call, such as "The other party has rejected the mixed call".
When the called end receives a control signal input by the user corresponding to the second prompt information, and the control signal is a signal accepting the mixed call, the called end generates a confirmation signal and sends it to the server.
In some embodiments, the server can directly forward the confirmation signal to the calling end, and the calling end, upon receiving the confirmation signal, controls the 3D camera module to collect the first depth image and sends the first depth image to the server.
In some embodiments, before the calling end issues the mixed call request, the call between the two users is a voice call. If the user touched the mixed call control by mistake and the other party accepted the mixed call request, starting the 3D camera module directly upon the confirmation signal might expose the privacy of the calling end; or the calling end did not touch the control by mistake and does want to establish a mixed call connection, but is not yet ready to turn on the camera. To protect the privacy of the calling end, the server can, based on the confirmation signal of the called end, send a first prompt signal to the calling end, and the calling end, upon receiving the first prompt signal, generates first prompt information and controls the display to show it. The first prompt information may include a prompt box and selection controls. The content of the prompt box may ask whether to proceed with the mixed call, such as "Confirm the mixed call?". There may be two selection controls: one, when triggered, indicates that the user of the calling end confirms the mixed call; the other, when triggered, indicates that the user of the calling end cancels the mixed call. After the user of the calling end triggers the selection control for confirming the mixed call, the video call application of the calling end controls the 3D camera module to collect the first depth image and sends it to the server.
The first depth image may include a point cloud containing depth information. In some embodiments, the video call application of the calling end generates a mixed stream from the first depth image, the audio collected by the microphone, and the video collected by the other cameras of the 3D camera module, and sends it to the server for audio and video processing, such as portrait background blurring, portrait beautification, and sound effect settings.
In some embodiments, after receiving the first depth image sent by the calling end, the server can send a person depth image request to the called end to request the spatial information, i.e., the depth information, of the person at the called end. Upon receiving the person depth image request, the called end can control the 3D camera module to collect a second depth image, which may include a point cloud containing depth information; the called end can extract the depth information of the second person, i.e., the person-space segmentation information, from the second depth image and send it to the server. The method of extracting the depth information of the second person from the second depth image includes: performing human body recognition on the second depth image by using a human body recognition algorithm to identify the position of the second person in the second depth image; and performing background segmentation according to the position of the second person in the second depth image, so as to strip the depth information of the second person out of the second depth image and obtain the depth information of the second person.
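Under the assumption that the depth image is available as a per-pixel depth map and that a human body recognition step has already produced a bounding box, the background segmentation could be sketched as below; the box input, the median heuristic, and the tolerance are illustrative choices, not the algorithm the application mandates.

```python
import numpy as np

def extract_person_depth(depth_map: np.ndarray, bbox: tuple) -> np.ndarray:
    """Strip a person's depth information out of a depth image.

    `depth_map` is an HxW array of depth values; `bbox` = (top, left,
    bottom, right) is assumed to come from a human body recognition step.
    Pixels inside the box whose depth is close to the median person depth
    are kept; everything else is treated as background and zeroed out.
    """
    top, left, bottom, right = bbox
    roi = depth_map[top:bottom, left:right]
    person_distance = np.median(roi[roi > 0])   # rough distance to the person
    tolerance = 0.3                             # metres; illustrative value
    mask = np.zeros(depth_map.shape, dtype=bool)
    mask[top:bottom, left:right] = np.abs(roi - person_distance) < tolerance
    return np.where(mask, depth_map, 0.0)       # background stripped away
```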
In some embodiments, after receiving the first depth image sent by the calling end, the server can send a depth image request to the called end to request the called end's depth information. Upon receiving the depth image request, the called end can control the 3D camera module to collect the second depth image and send it to the server, and the server extracts the depth information of the second person from the second depth image.
The server can render the second person into the first depth image according to the depth information of the second person and the depth information of the first depth image, obtain a first mixed image, and send the first mixed image to the called end and the calling end respectively. After receiving the first mixed image, the calling end and the called end each control their own display to show it. Referring to FIG. 9, a schematic diagram of a mixed call interface according to some embodiments: as shown in FIG. 9, in the first mixed image the first person and the second person are both in the same background, and the background is the real background of the first person.
In some embodiments, the server can perform audio and video processing on the first mixed image to obtain an AR mixed stream, and send the AR mixed stream to the calling end and the called end respectively, for them to present the processed first mixed image and audio.
In some embodiments, the mixed call interface may be provided with a switch background control; as shown in FIG. 9, the control may be named "Switch background". When the user of the calling end or of the called end triggers the switch background control, the server can switch the first mixed image to the second mixed image shown in FIG. 10, whose background is the real background of the second person. Taking the user of the calling end triggering the switch background control as an example, the specific switching process is as follows:
After the user of the calling end inputs a control signal for instructing background switching, for example by clicking the switch background control, the calling end, in response to receiving the control signal, sends a switch background request to the server.
In some embodiments the called end has sent the second depth image to the server, while in other embodiments the called end has sent only the depth information of the second person, and switching to the background of the second person requires the background depth information of the second person. Therefore, the server can determine whether it has the background depth information of the second person. If it does, the server can extract the depth information of the first person from the first depth image (the extraction method is the same as that for extracting the second person's depth information from the second depth image) and render the first person into the second depth image to obtain the second mixed image. If it does not, the server can send a depth image request to the called end to request the second depth image, then extract the depth information of the first person from the first depth image and render the first person into the second depth image to obtain the second mixed image.
After generating the second mixed image, the server sends it to the called end and the calling end respectively. After receiving the second mixed image, the calling end and the called end each control their own display to switch from the first mixed image to the second mixed image.
As shown in FIG. 10, the interface of the second mixed image may keep the switch background control, allowing the user to switch the second mixed image back to the first mixed image.
To further illustrate the above mixed call solution, an embodiment of the present application further provides a video calling method. Referring to FIG. 11, the video calling method may include the following steps:
Step S110: sending a mixed call request of the calling end to the called end.
After receiving the mixed call request of the calling end, the server can send it to the called end.
Step S120: upon receiving a confirmation signal of the called end, obtaining the first depth image collected by the calling end.
In some embodiments, after receiving the confirmation signal of the called end corresponding to the mixed call request, the server can send the confirmation signal to the calling end, so that the calling end, upon receiving it, controls the 3D camera module to collect the first depth image and sends it to the server.
In some embodiments, after receiving the confirmation signal of the called end corresponding to the mixed call request, the server can send a first prompt signal to the calling end so that the calling end displays the first prompt information; after receiving the confirmation signal corresponding to the first prompt information, the calling end controls the 3D camera module to collect the first depth image and sends it to the server.
Step S130: obtaining the depth information of the second person of the called end.
After receiving the first depth image, the server can send a person depth image request to the called end to obtain the depth information of the second person in the second depth image.
Step S140: rendering the second person into the first depth image according to the depth information of the second person, to obtain a first mixed image.
The server can, according to the depth information of the second person and the background depth information in the first depth image, render the second person at a suitable position in the first depth image, for example at the same horizontal position as the first person, adjust the size of the second person to be comparable to that of the first person, and finally compose the first mixed image.
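The composition step can be pictured with the following sketch, which is not the server's actual renderer: the depth-based decisions described above are abstracted into `target_height_px` (a size comparable to the first person) and `anchor` (the same horizontal level as the first person), both of which are hypothetical names.

```python
import numpy as np

def render_person_into_scene(scene_rgb: np.ndarray,
                             person_rgb: np.ndarray,
                             person_mask: np.ndarray,
                             target_height_px: int,
                             anchor: tuple) -> np.ndarray:
    """Paste a segmented person into the other party's frame.

    `person_mask` marks the person's pixels (from the segmentation step).
    The person is cropped, rescaled by nearest neighbour to keep the
    sketch dependency-free, and pasted at `anchor` = (row, col). The
    scaled crop is assumed to fit inside `scene_rgb` at that anchor.
    """
    rows, cols = np.where(person_mask)
    r0, r1 = rows.min(), rows.max() + 1
    c0, c1 = cols.min(), cols.max() + 1
    crop_rgb, crop_mask = person_rgb[r0:r1, c0:c1], person_mask[r0:r1, c0:c1]

    scale = target_height_px / crop_rgb.shape[0]
    idx_r = (np.arange(int(crop_rgb.shape[0] * scale)) / scale).astype(int)
    idx_c = (np.arange(int(crop_rgb.shape[1] * scale)) / scale).astype(int)
    crop_rgb, crop_mask = crop_rgb[idx_r][:, idx_c], crop_mask[idx_r][:, idx_c]

    out = scene_rgb.copy()
    ar, ac = anchor
    region = out[ar:ar + crop_rgb.shape[0], ac:ac + crop_rgb.shape[1]]
    region[crop_mask] = crop_rgb[crop_mask]     # person over the background
    return out
```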
Step S150: sending the first mixed image to the calling end and the called end respectively.
The server sends the first mixed image to the calling end and the called end respectively, for them to display the first mixed image on their own displays.
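Tying steps S110 to S150 together, the server-side flow can be summarized in the sketch below; `server`, `caller`, and `callee` are hypothetical session objects whose method names merely mirror the messages described above.

```python
def handle_mixed_call(server, caller, callee) -> None:
    """Illustrative server-side walk through steps S110-S150."""
    server.forward(callee, caller.mixed_call_request())    # S110
    if not callee.confirms():                              # rejection path
        server.forward(caller, "mixed call rejected")
        return
    first_depth = caller.collect_first_depth_image()       # S120
    person_info = callee.person_depth_information()        # S130
    mixed_image = server.render(person_info, first_depth)  # S140
    server.send(caller, mixed_image)                       # S150
    server.send(callee, mixed_image)
```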
Further, the server can also receive a switch background request from the calling end or the called end, switch the first mixed image to the second mixed image, or switch the second mixed image back to the first mixed image.
An embodiment of the present application further provides a server that can be used to execute the above video calling method.
It can be seen from the above embodiments that the embodiments of the present application collect the depth information of both parties of a call through 3D camera modules, and render the person of one party into the depth image of the other party according to the depth information of both parties, realizing real-time display of both parties of the call in the same real background, solving the problem that the two persons on the call interface are in different backgrounds, and improving the user's video call experience.
Second aspect:
A rotating TV is a new type of smart TV, mainly including a display and a rotating assembly. The display is fixed to a wall or a bracket through the rotating assembly, and the orientation of the display can be adjusted through the rotating assembly so as to rotate to fit display pictures of different aspect ratios. For example, in most cases the display is placed horizontally to show video pictures with aspect ratios such as 16:9 and 18:9. When the aspect ratio of the video picture is 9:16, 9:18, or similar, the horizontally placed display has to scale the picture and show black areas on both sides. The display can therefore be rotated into a vertical position through the rotating assembly to fit video pictures with aspect ratios such as 9:16 and 9:18.
With the wide adoption of mobile phone applications, vertical videos and pictures occupy most media resources, and many video service providers even deliberately produce vertical content. A rotatable display device used for vertical video playback and vertical picture browsing can therefore bring a better user experience. To rotate the screen of a display device, current approaches in the field are usually driven by a remote control, by voice, by the video being played, or by mobile phone screen casting; for example, pressing a "rotate" key on the remote control drives the rotating assembly. However, these approaches rely on external devices, their operation is complicated, and the operation process depends on UI prompts, so the interaction is not intuitive.
To make it convenient for the user to control the rotation process, embodiments of the present application provide a display device and a rotation control method, the rotation control method being configurable in the display device. The display device may be a rotatable display device such as a rotating TV, a computer, or a tablet computer.
Referring to FIG. 1, an application scenario diagram of a display device provided by some embodiments of the present application: as shown in FIG. 1, the control apparatus 100 and the display device 200 can communicate in a wired or wireless manner.
The control apparatus 100 is configured to control the display device 200; it can receive operation instructions input by the user and convert them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down keys on the control apparatus 100, the display device 200 responds to the channel up/down operation.
The control apparatus 100 may be a remote control 100A, using infrared protocol communication, Bluetooth protocol communication, or other short-range communication methods, to control the display device 200 wirelessly or in another wired manner. The user can control the display device 200 through keys on the remote control, voice input, control panel input, and the like. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power key on the remote control, to realize the functions of controlling the display device 200.
The control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, a computer, or a notebook computer. For example, the display device 200 is controlled using an application running on the smart device. Through configuration, the application can provide the user with various controls through an intuitive user interface (UI) on the screen associated with the smart device.
Exemplarily, the mobile terminal 100B and the display device 200 can both install software applications and realize connection and communication through a network communication protocol, achieving one-to-one control operation and data communication. For example, the mobile terminal 100B and the display device 200 can establish a control instruction protocol, and by operating the various function keys or virtual controls of the user interface provided on the mobile terminal 100B, the functions of the physical keys arranged on the remote control 100A can be realized. The audio and video content displayed on the mobile terminal 100B can also be transmitted to the display device 200 to realize a synchronous display function.
The display device 200 can provide a network TV function combining a broadcast receiving function and a computer-supported function. The display device may be implemented as a digital TV, an Internet TV, an Internet Protocol TV (IPTV), and the like.
The display device 200 may be a liquid crystal display, an organic light-emitting display, or a projection device; the specific display device type, size, and resolution are not limited.
The display device 200 also performs data communication with a server 300 through multiple communication methods. The display device 200 may be allowed to communicate through a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 300 can provide various contents and interactions to the display device 200. Exemplarily, the display device 200 can send and receive information, for example: receiving electronic program guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library. The server 300 may be one group or multiple groups of servers, and may be one or more types of servers. The server 300 provides video on demand, advertising services, and other network service contents.
In some embodiments, as shown in FIG. 12, the display device 200 includes a rotating assembly 276 connected to the back plate, a controller 250, a display 275, and a terminal interface 278 extending from a gap on the back plate; the rotating assembly 276 can rotate the display 275. Viewed from the front of the display device, the rotating assembly 276 can rotate the display to a portrait state, i.e., a state where the vertical side of the screen is longer than the horizontal side, and can also rotate the screen to a landscape state, i.e., a state where the horizontal side of the screen is longer than the vertical side.
FIG. 13 exemplarily shows a configuration block diagram of the control apparatus 100. As shown in FIG. 13, the control apparatus 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, a user output interface 150, and a power supply 160.
FIG. 14 exemplarily shows a hardware configuration block diagram of the display device 200. As shown in FIG. 14, the display device 200 may include a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, a rotating assembly 276, a touch component 277, an audio processor 280, an audio output interface 285, and a power supply 290.
The rotating assembly 276 may include components such as a drive motor and a rotating shaft. The drive motor may be connected to the controller 250 and output a rotation angle under the control of the controller 250; one end of the rotating shaft is connected to the power output shaft of the drive motor, and the other end is connected to the display 275, so that the display 275 can be fixedly mounted on a wall or a bracket through the rotating assembly 276.
The rotating assembly 276 may also include other components, such as a transmission component and a detection component. The transmission component can adjust the rotation speed and torque output by the rotating assembly 276 through a specific transmission ratio and may be a gear transmission; the detection component may consist of sensors arranged on the rotating shaft, such as an angle sensor and an attitude sensor. These sensors can detect parameters such as the angle through which the rotating assembly 276 rotates and send the detected parameters to the controller 250, so that the controller 250 can judge or adjust the state of the display device 200 according to the detected parameters. In practical applications, the rotating assembly 276 may include, but is not limited to, one or more of the above components.
The touch component 277 may be arranged on the display screen of the display 275 to detect the user's touch actions. The controller 250 can obtain touch instructions input by the user through the touch component 277 and respond with different control actions according to different touch instructions.
The touch instructions input by the user can take multiple forms according to the different touch actions they correspond to, for example, tap, slide, and long press. If the touch component 277 supports multi-touch, the forms of touch instructions can be further extended, for example, two-finger tap, two-finger slide, two-finger long press, three-finger tap, three-finger slide, and so on. Different forms of touch instructions can represent different control actions; for example, a tap action performed on an application icon can represent starting the application corresponding to that icon.
As shown in FIG. 14, the controller 250 includes a random access memory (RAM) 251, a read-only memory (ROM) 252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM 251, the ROM 252, the graphics processor 253, the CPU processor 254, and the communication interface 255 are connected through the communication bus 256.
FIG. 15 exemplarily shows an architecture configuration block diagram of the operating system in the memory of the display device 200. From top to bottom, the operating system architecture consists of an application layer, a middleware layer, and a kernel layer.
Application layer: system built-in applications and non-system-level applications belong to the application layer, which is responsible for direct interaction with the user. The application layer may include multiple applications, such as a settings application, an electronic post application, and a media center application. These applications may be implemented as Web applications executed based on the WebKit engine, and specifically may be developed and executed based on HTML5 (HyperText Markup Language), Cascading Style Sheets (CSS), and JavaScript.
Middleware layer: provides some standardized interfaces to support operation in various environments and systems. For example, the middleware layer may be implemented as the Multimedia and Hypermedia information coding Experts Group (MHEG) middleware related to data broadcasting, as the DLNA middleware related to communication with external devices, or as middleware providing the browser environment in which each application in the display device runs, and the like.
Kernel layer: provides core system services, such as file management, memory management, process management, network management, and system security permission management. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, providing device driver services for various hardware, for example: a display driver for the display, a camera driver for the camera, a key driver for the remote control, a WiFi driver for the WiFi module, an audio driver for the audio output interface, a power management driver for the power management (PM) module, and the like.
In FIG. 14, the user interface 265 receives various user interactions. Specifically, it is used to send the user's input signals to the controller 250, or to transmit output signals from the controller 250 to the user. Exemplarily, the remote control 100A can send input signals input by the user, such as a power switch signal, a channel selection signal, and a volume adjustment signal, to the user interface 265, which then forwards them to the controller 250; or the remote control 100A can receive output signals such as audio, video, or data processed by the controller 250 and output from the user interface 265, and display the received output signals or output them in audio or vibration form.
In the technical solution provided by the present application, the rotation operation of the display device 200 refers to the angle adjustment process in which the rotating assembly 276 drives the display 275 so that the placement angle of the display 275 changes. Usually, the rotating assembly 276 can drive the display 275 to rotate in a vertical plane perpendicular to the ground, so that the display 275 is in different rotation states.
The rotation states are a number of specific states of the display 275 and can be set in multiple different forms according to the attitude of the display 275, for example, a landscape state, a portrait state, and a tilted state. The landscape state and the portrait state are the rotation states used by the vast majority of users and are applicable to landscape and portrait scenarios respectively; therefore, in some embodiments of the present application, the landscape state and the portrait state may be called standard states. The tilted state is usually a state in which the display 275 has not rotated into place due to a failure of the rotating assembly 276, and users rarely deliberately rotate the display 275 to a tilted state. Accordingly, in some embodiments of the present application, the tilted state may also be called a non-standard state.
In some embodiments, the display content presented on the display 275 can differ in different rotation states, the differences showing in the specific playback picture content, the UI layout, and the like. For example, FIG. 16A shows the landscape state of the display device 200 in some embodiments of the present application. The operation mode when the display 275 is in the landscape state may be called a landscape media viewing mode, and the operation mode when the display 275 is in the portrait state may be called a portrait media viewing mode.
The rotating assembly 276 can fix the display device 200 and, under the control of the controller 250, drive the display 275 to rotate so that the display 275 is in different rotation states. The rotating assembly 276 may be fixed to the back of the display 275 and used for fixing to a wall surface. The rotating assembly 276 can receive a control instruction from the controller 250 to rotate the display 275 in a vertical plane, so that the display 275 is in the landscape state or the portrait state.
The landscape state refers to a state in which, viewed from the front of the display 275, the length of the display 275 in the horizontal direction (width) is greater than the length in the vertical direction (height); the portrait state refers to a state in which, viewed from the front of the display 275, the length of the display 275 in the horizontal direction (width) is less than the length in the vertical direction (height). Obviously, in the present application the vertical direction means roughly vertical, and the horizontal direction likewise means roughly horizontal. Rotation states other than the landscape state and the portrait state are tilted states, and in different tilted states the rotation angle of the display 275 is also different.
Driven by the rotating assembly 276, the display 275 can rotate 90 degrees clockwise or counterclockwise to adjust the display 275 to the portrait state, as shown in FIG. 16B. In the portrait state, the display 275 can show the user interface corresponding to the portrait state, with the interface layout and interaction methods corresponding to the portrait state. In the portrait media viewing mode, the user can watch portrait media such as short videos and comics. Since the controller 250 in the display device 200 is further in communication connection with the server 300, in the portrait state the corresponding portrait media data can be obtained by calling the interface of the server 300.
It should be noted that the landscape state is mainly used to display landscape media such as TV series and movies, and the portrait state is mainly used to display portrait media such as short videos and comics. The landscape state and portrait state are merely two different display states and do not limit the displayed content; for example, portrait media such as short videos and comics can still be displayed in the landscape state, and landscape media such as TV series and movies can still be displayed in the portrait state, except that in such a state the mismatched display window needs to be compressed and adjusted.
To adjust the rotation state of the display 275, as shown in FIG. 17A, some embodiments of the present application provide a rotation control method, including the following steps:
S1: obtaining a touch instruction input by the user for rotating the display.
The user can input the touch instruction for rotating the display 275 through the touch component 277. The specific form of the touch instruction may, according to the system UI interaction strategy, be one or more of "tap, slide, long press". However, to distinguish it effectively from other touch operations and reduce misoperation, the touch instruction for rotating the display 275 can be more distinctive or complex than other operations.
For example, according to users' operating habits on touchscreen terminals, single-finger tap, slide, and long-press touch instructions are usually used for "starting a program", "moving a position", and "extended operations"; therefore, the touch instruction for rotating the display 275 may be a multi-finger tap, slide, or long-press touch instruction, so as to be distinguished from single-finger tap, slide, and long-press touch instructions.
Obviously, the touch instruction for rotating the display 275 can be one or more of two-finger, three-finger, four-finger, or five-finger touch. For example, the touch instruction for rotating the display 275 can be set to five-finger touch, or it can be set to be a multi-finger touch instruction, i.e., when the user inputs a two-finger, three-finger, four-finger, or five-finger touch instruction, the rotation of the display 275 can be triggered, as shown in FIG. 17B.
In some embodiments, the touch instruction for rotating the display 275 may include two partial actions, namely a touch action part and a rotation action part. The touch action part is used to trigger the controller 250 to detect the touch action corresponding to the touch instruction, so as to determine whether to start the rotation. The rotation action part can be input after the touch action part and is used to assist in judging whether to trigger the rotation and to control the turning mode of the rotating assembly 276, including the control of parameters such as turning direction and turning angle.
S2: in response to the touch instruction, extracting the touch action corresponding to the touch instruction.
After obtaining the touch instruction for rotating the display 275, the controller 250 of the display device 200 can, in response to the touch instruction, extract the touch action corresponding to the touch instruction.
Depending on the interaction action corresponding to the preset touch instruction, the way the controller 250 extracts the touch action in practice also differs. When the touch instruction is one continuous action, the controller 250 can extract the touch action directly by detecting the signal data corresponding to the touch instruction. For example, if the operating system sets the touch instruction for rotating the display 275 to be a single finger drawing an "O"-shaped pattern on the screen, the controller 250 can extract the touch action in the touch instruction only after the user finishes inputting the touch instruction.
When the touch instruction includes two (or more) parts, detection of the rotation action part input after the touch action part can be triggered upon receiving the touch action part. For example, the operating system sets the touch instruction for rotating the display 275 to be a five-finger turning action. During operation, the user can first input the five-finger touch instruction, i.e., touch the screen with five fingers. After detecting the five-finger touch operation through the touch component 277, the controller 250 can further start the detection program, detect the rotation action subsequently input by the user on the screen, and, according to the specific rotation action, control the rotation process of the display 275.
S3: if the touch action is the same as the preset rotation action, controlling the rotating assembly to adjust the rotation state of the display.
After extracting the touch action, the controller 250 can compare the extracted touch action with the preset rotation action. If they are the same, it is determined that the user wants to rotate the display 275, so the rotating assembly 276 can be controlled to start running to adjust the rotation state of the display 275.
For example, when the rotation states include the landscape state and the portrait state, after determining that the touch action is the same as the preset rotation action, the controller 250 can first obtain the current rotation state of the display 275. If the current rotation state of the display 275 is the landscape state, the rotating assembly 276 is controlled to adjust the display 275 to the portrait state; if the current rotation state is the portrait state, the rotating assembly 276 is controlled to adjust the display 275 to the landscape state.
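This landscape/portrait toggle amounts to the small sketch below, where `controller` and `rotating_assembly` are hypothetical stand-ins for the controller 250 and the rotating assembly 276.

```python
from enum import Enum

class RotationState(Enum):
    LANDSCAPE = "landscape"
    PORTRAIT = "portrait"

def toggle_rotation(controller, rotating_assembly) -> None:
    """Flip between the two standard states on a matching touch action."""
    if controller.current_rotation_state() == RotationState.LANDSCAPE:
        rotating_assembly.rotate_to(RotationState.PORTRAIT)
    else:
        rotating_assembly.rotate_to(RotationState.LANDSCAPE)
```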
In the above process of controlling the operation of the rotating assembly 276, the controller 250 can send a control instruction to the rotating assembly 276, so that after receiving the control instruction the rotating assembly 276 rotates according to it. The control instruction may include some basic operation parameters for controlling the rotation of the rotating assembly 276, such as rotation direction and rotation angle.
The specific values of the parameters in the control instruction can be determined according to the current rotation state and the specific rotation mode. For example, when the display 275 is in the landscape state, if the touch action is the same as the preset rotation action, the rotating assembly 276 can be controlled to turn 90 degrees clockwise to adjust the display 275 to the portrait state; similarly, when the display 275 is in the portrait state, if the touch action is the same as the preset rotation action, the rotating assembly 276 can be controlled to turn 90 degrees counterclockwise to adjust the display 275 to the landscape state.
The specific values of the parameters in the control instruction can also be determined according to the rotation action input by the user. For example, if the rotation action input by the user is clockwise, the rotating assembly 276 can be controlled to turn 90 degrees clockwise to the corresponding rotation state; if the rotation action input by the user is counterclockwise, the rotating assembly 276 can be controlled to turn 90 degrees counterclockwise to the corresponding rotation state.
It can be seen from the above technical solution that the rotation control method provided by the present application can, after the user inputs a touch instruction, extract the touch action corresponding to the touch instruction and compare it with the preset rotation action, so that when the touch action is the same as the preset rotation action, the rotating assembly 276 is controlled to start rotating to adjust the rotation state of the display 275, realizing a rotation control method for the display 275 driven by gesture touch detection.
Since touch operations cannot, like traditional UI interaction operations, be performed on a specific UI interface with text prompts guiding the user through the operation, in practical applications the user needs to remember the specific actions of the touch operation to be able to rotate the display 275, which affects the user experience. To help the user remember the specific interaction actions, in some embodiments of the present application, the user can also be guided through the touch interaction by displaying a prompt UI interface. That is, as shown in FIG. 18A, the step of obtaining the touch instruction for rotating the display further includes:
detecting the number of touch points in the touch instruction;
if the number of touch points equals a preset judgment number, controlling the display to show a prompt UI interface.
After the user inputs a touch instruction, the controller 250 can detect the number of touch points in the touch instruction through the touch component 277 and judge whether the number of touch points equals the preset judgment number, so as to control the display 275 to show the prompt UI interface. The prompt UI interface includes patterns and/or text indicating the rotation action. The preset judgment number can be set according to the actual UI interaction rules of the operating system, and different prompt UI interfaces can also be shown when the number of touch points in the user's touch instruction differs.
For example, as shown in FIG. 18B, the user can input a touch instruction by touching the screen with multiple fingers at the same time. After the controller 250 detects the user's multi-finger touch operation through the touch component 277, it can detect the number of touch points in the touch operation. If the number of touch points is 5, equal to the preset judgment value of 5, it is determined that the current user may be inputting a touch instruction for rotating the display 275, so the display 275 can be controlled to show the UI interface prompting the rotation operation, thereby prompting the user through patterns and/or text on the prompt UI interface to complete the subsequent input of the rotation action.
In the prompt UI interface, the user can also be prompted to complete the input through more intuitive content such as animation and video. The prompt UI interface can be displayed semi-transparently on the top layer of the display picture and remain displayed while the user touches the screen, until the user completes the subsequent rotation action.
In addition, to help the user remember more touch gesture actions, a general function entry based on unified touch operations can also be set in the control system of the display device 200. That is, in any scenario, the user can call up the prompt UI interface through a set touch action, and the prompt UI interface indicates different interaction actions through multiple graphics, texts, or animations, so that the user can input according to the prompt UI interface.
For example, the wake-up action of the general function entry can be defined as a five-finger touch instruction, so that the prompt UI interface is displayed after the user touches the screen with five fingers. Function entry controls can be set directly in the prompt UI interface, so that the corresponding function is realized when the user clicks the control. The prompt UI interface can also show, in turn, animations of the gestures corresponding to multiple functions; for example, a multi-finger turning animation indicates that the user can start the rotating assembly 276 through a multi-finger turning instruction, and a single-finger downward-slide animation indicates that the user can view the message interface through a single-finger downward-slide instruction, and so on.
It should be noted that, since the contact between a finger and the touchscreen is an area contact when the user inputs a touch instruction, i.e., one finger forms one continuous contact region with the screen, the above number of touch points can refer to the number of continuous contact regions during the interaction.
It can be seen from the above technical solution that in this embodiment, by detecting the number of touch points in the touch instruction and displaying the prompt UI interface when the number of touch points equals the preset judgment number, the user does not need to remember numerous touch methods, which helps the user input interaction actions more accurately, so that the controller 250 can judge them and execute the corresponding function.
The rotation actions input by the user can be detected in different ways according to the input action. For a rotation action that requires sliding on the screen, which action the user has input can be determined by judging the sliding trajectory of the touch points. That is, in some embodiments of the present application, as shown in FIG. 19A, the step of extracting the touch action corresponding to the touch instruction includes:
S201: traversing the touch point coordinates in the touch instruction;
S202: generating a touch action trajectory from the consecutive touch point coordinates;
S203: comparing the shape of the touch action trajectory with the shape of the preset rotation action trajectory and generating a comparison result, so as to determine from the comparison result whether the touch action is the same as the preset rotation action.
To detect touch input, a planar coordinate system can be constructed within the screen area of the display 275, so that any position on the touch component 277 can be represented by the constructed coordinate system. When the user's finger touches any position on the screen, the touch position can be represented by touch point coordinates, and the consecutive touch point coordinates detected over a period of time can be used to represent the user's touch action trajectory.
Obviously, the detected action trajectory can be graphic data consisting of multiple touch point coordinates. By comparing the shape of the touch action trajectory with the shape of the preset rotation action trajectory, it can be determined whether the touch action is the same as the preset rotation action. Since users perform touch operations with different movement amplitudes, the extracted touch action trajectory shapes also take multiple forms. To determine whether the touch action is the same as the preset rotation action, the shape category of the trajectory can be judged directly: if the shape of the touch action trajectory and the shape of the preset rotation action trajectory belong to the same category, the touch action is determined to be the same as the preset rotation action.
For example, as shown in FIG. 19B, if the shape of the touch action trajectory and the shape of the preset rotation action trajectory are both "O"-shaped, the touch action is determined to be the same as the preset rotation action; whether the user inputs an "O" of larger or smaller diameter, it can be judged that the preset rotation action has been input, and the rotating assembly 276 is controlled to rotate the display 275. Similarly, if the shape of the touch action trajectory and the shape of the preset rotation action trajectory are both a circle, rectangle, triangle, quadrilateral, and so on, it can also be determined that the touch action is the same as the preset rotation action.
It can be seen from the above technical solution that the above embodiment can traverse the touch point coordinates in the touch instruction, detect the shape of the touch action trajectory formed by the consecutive touch point coordinates, and compare whether the shape of the touch action trajectory and the shape of the preset rotation action trajectory belong to the same category, thereby determining whether the touch action is the same as the preset rotation action. This embodiment can improve the fault tolerance of the system's judgment process, so that the user can control the rotation process of the rotating assembly without having to input an action exactly identical to the preset rotation action.
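One way such a same-category check could work for an "O"-shaped preset is sketched below. The test is scale-invariant, so a large "O" and a small "O" both match; this is an illustrative classifier, not the matching rule the application prescribes.

```python
import math

def is_o_shaped(points, tolerance: float = 0.25) -> bool:
    """Decide whether a touch trajectory is roughly an "O".

    `points` is a time-ordered list of (x, y) touch point coordinates.
    The trajectory counts as an "O" when every point lies at a similar
    distance from the centroid; only the relative spread of the radii is
    tested, so the diameter of the "O" does not matter.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    spread = max(abs(r - mean_r) for r in radii)
    return mean_r > 0 and spread / mean_r < tolerance
```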
In some embodiments of the present application, the action of the rotating assembly 276 can also be controlled through a rotation action instruction input after the touch instruction. That is, as shown in FIG. 20, the step of controlling the rotating assembly to adjust the rotation state of the display further includes:
S310: obtaining a rotation action instruction input by the user;
S320: in response to the rotation action instruction, extracting the rotation action direction;
S330: controlling the rotating assembly to rotate the display in the same direction as the rotation action direction.
The rotation action instruction refers to a sliding action that the user can perform on the screen. Taking the five-finger touch mode as an example, the user can input the rotation action instruction by sliding an arc-shaped trajectory with five fingers on the screen at the same time. The rotation action instruction and the previously input touch instruction can be one continuous action, i.e., after the five-finger touch operation, the rotation action can be input directly by sliding an arc trajectory. The rotation action instruction can also be an action discontinuous with the previously input touch instruction; for example, the touch instruction can be input by a five-finger touch action, while the rotation action instruction can be the five-finger turning action input after the five-finger touch action triggers the display of the prompt UI interface.
After obtaining the rotation action instruction, the controller 250 can determine the rotation direction corresponding to the rotation action instruction from the temporal variation of the touch point coordinates. For example, when new touch point coordinates are continuously added in the clockwise direction starting from the initial touch point coordinates in the rotation action instruction, the rotation direction is determined to be clockwise.
The controller 250 then controls, according to the rotation action direction, the rotating assembly 276 to rotate the display 275 in the same direction, thereby adjusting the rotation state of the display 275. In some embodiments, the controller 250 can first detect the current rotation state of the display 275 and the adjacent rotation states of the current rotation state, where the adjacent states are, among the other rotation states different from the current one, the two rotation states with the smallest angular difference from the current rotation state. For example, if the current rotation state is the landscape state, the two rotation states with the smallest angular difference from it are the "+90° portrait state" and the "-90° portrait state".
Then, according to the determined rotation direction, the rotating assembly 276 is controlled to rotate, so as to rotate the display 275 to the adjacent state in the corresponding rotation direction. That is, if the rotation action direction is clockwise, the rotating assembly 276 is controlled to turn the display 275 clockwise to the adjacent rotation state, for example rotating the display 275 clockwise by 90° from the landscape state to the "+90° portrait state". Similarly, if the rotation action direction is counterclockwise, the rotating assembly is controlled to turn the display counterclockwise to the adjacent rotation state, for example rotating the display 275 counterclockwise by 90° from the landscape state to the "-90° portrait state".
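The temporal-variation test for the turning sense can be sketched as follows; the accumulated cross product of successive radius vectors around the trajectory centroid is one plausible reading of it, not the only one.

```python
def rotation_direction(points) -> str:
    """Infer clockwise or counterclockwise from time-ordered touch points.

    Screen coordinates with y growing downward are assumed, so a positive
    accumulated cross product corresponds to clockwise motion on screen.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # cross product of consecutive radius vectors from the centroid
        total += (x0 - cx) * (y1 - cy) - (y0 - cy) * (x1 - cx)
    return "clockwise" if total > 0 else "counterclockwise"
```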
In some embodiments, as shown in FIG. 21, after the step of obtaining the rotation action instruction input by the user, the rotation control method further includes:
S311: extracting the bending angle of the touch point trajectory from the rotation action instruction;
S312: if the bending angle is greater than or equal to a preset start threshold, sending a control instruction to the rotating assembly so that the rotating assembly turns;
S313: if the bending angle is less than the preset start threshold, controlling the display to show the prompt UI interface.
The controller 250 can extract the bending angle corresponding to the touch point trajectory from the rotation action instruction. The bending angle can be the bend angle of the trajectory figure, or a judgment angle corresponding to the trajectory figure; for example, the bending angle can be the central angle corresponding to an arc-shaped trajectory.
After extracting the bending angle of the touch point trajectory, the bending angle can be compared with the preset start angle threshold, so that the control instruction is sent to the rotating assembly 276 to start it turning when the bending angle is greater than or equal to the preset start threshold. For example, it can be set that rotation starts when the rotation angle exceeds 20 degrees: when the bending angle of the touch point trajectory is less than 20°, the rotating assembly 276 is not started, and the prompt UI interface is still displayed on the display 275 to guide the user to continue rotating so as to start the rotating assembly 276; when the bending angle of the touch point trajectory is greater than or equal to 20°, the control instruction is sent to the rotating assembly 276 so that it starts turning.
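Reading the bending angle as the central angle of the arc, the 20-degree gate can be sketched like this; the centroid-based estimate is an assumption made only for illustration.

```python
import math

def swept_angle_degrees(points) -> float:
    """Approximate the central angle swept by an arc-shaped trajectory.

    The angle between the first and last radius vectors, measured from
    the trajectory centroid, stands in for the arc's central angle.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    a0 = math.atan2(points[0][1] - cy, points[0][0] - cx)
    a1 = math.atan2(points[-1][1] - cy, points[-1][0] - cx)
    diff = math.degrees(a1 - a0)
    return abs((diff + 180.0) % 360.0 - 180.0)  # wrap-safe shortest angle

def should_start_rotation(points, start_threshold_deg: float = 20.0) -> bool:
    """Start the rotating assembly only past the preset start threshold."""
    return swept_angle_degrees(points) >= start_threshold_deg
```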
It can be seen from the above technical solution that the above embodiment can detect the bending angle of the touch point trajectory and compare it with the preset start threshold, so that the rotating assembly 276 is started to turn only when the rotation angle corresponding to the rotation action instruction is relatively large, mitigating misoperation by the user, i.e., preventing the rotation action from being triggered too frequently. At the same time, through the prompt UI picture, the user can also be guided to complete the input of the rotation action instruction accurately, so that the rotation can be completed smoothly when the user wants to perform a rotation operation.
In some embodiments of the present application, the rotation states further include a tilted state. Normally, the display device 200 does not stay in a tilted state for a long time under regular use; the tilted state is usually caused by abnormal conditions, for example a stall caused by mechanical hardware failure or jamming by a foreign object, i.e., the rotating assembly 276 has not driven the display 275 to the predetermined position during rotation. The tilted state will affect the user's viewing experience. Therefore, if the current rotation state of the display 275 is the tilted state, the rotation control method further includes:
after obtaining the touch instruction, controlling the rotating assembly to rotate the display to the landscape state or portrait state with the smallest angular difference from the tilted state.
The controller 250 can detect the current rotation state of the display 275 through devices such as a gravity/angular-velocity sensor built into the display device 200. For example, when it is detected that the current rotation state of the display 275 is tilted by +5 degrees, the landscape state or portrait state with the smallest angular difference from the current tilted state can be determined from the detected tilt angle. Here, the angular difference between the landscape state and the current tilted state is 5 degrees, while the angular difference between the portrait state and the current tilted state is 85 degrees; therefore, the rotating assembly 276 can be controlled to rotate 5 degrees counterclockwise to the landscape state.
To rotate quickly to the target state, the controller 250 can also, after obtaining the rotation action instruction, control the rotating assembly 276 to rotate the display 275 to the landscape state or portrait state in the rotation action direction. For example, when it is detected that the current rotation state of the display 275 is tilted by +5 degrees and the rotation direction corresponding to the rotation action instruction input by the user on the screen is clockwise, then although the angular difference between the landscape state and the current tilted state is 5 degrees, the display 275 is still rotated 85 degrees clockwise to adjust it to the portrait state.
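Both correction policies, snapping to the nearest standard state and honouring the user's turning direction, fit in one sketch; the state names and the sign convention (landscape at 0°, portrait at ±90°, clockwise increasing) are assumptions for illustration.

```python
def correct_tilt(tilt_deg: float, direction=None):
    """Choose the target standard state for a tilted display.

    `tilt_deg` is the signed offset from the landscape state; the tilt is
    assumed to lie strictly between the two portrait states. With no
    rotation action (`direction is None`) the nearest standard state wins;
    otherwise the user's turning direction is honoured even when the other
    standard state is closer (e.g. a +5 deg tilt plus a clockwise action
    still rotates 85 deg to the +90 deg portrait state).
    """
    targets = {"landscape": 0.0, "portrait +90": 90.0, "portrait -90": -90.0}
    if direction == "clockwise":
        targets = {k: v for k, v in targets.items() if v > tilt_deg}
    elif direction == "counterclockwise":
        targets = {k: v for k, v in targets.items() if v < tilt_deg}
    name = min(targets, key=lambda k: abs(targets[k] - tilt_deg))
    return name, targets[name] - tilt_deg  # target state, signed correction
```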
It can be seen from the above technical solution that the above embodiment can, when it is detected that the rotation state of the display 275 is a tilted state, correct the tilted state of the display 275, so that the display 275 stays in a standard state suitable for the user to watch, without affecting the rotation process.
Based on the above rotation control method, as shown in FIG. 22, some embodiments of the present application further provide a display device 200, including: a display 275, a rotating assembly 276, a touch component 277, and a controller 250. The display 275 is configured to present a specific user interface or playback picture. The rotating assembly 276 is connected to the display 275 and configured to drive the display 275 to rotate, so that the display 275 is in one of multiple rotation states. The touch component 277 is arranged on the screen of the display 275 and configured to detect touch instructions input by the user. The display 275, the rotating assembly 276, and the touch component 277 all establish electrical connections with the controller 250.
Furthermore, the controller 250 is configured to execute the following program steps:
S1: obtaining a touch instruction for rotating the display;
S2: in response to the touch instruction, extracting the touch action corresponding to the touch instruction;
S3: if the touch action is the same as the preset rotation action, controlling the rotating assembly to adjust the rotation state of the display.
In practical applications, the controller 250 can obtain the touch instruction for rotating the display 275 through the touch component 277, extract the touch action corresponding to the touch instruction from the data detected by the touch component 277, and judge whether the touch action is the same as the preset rotation action, thereby controlling the rotating assembly 276 to adjust the rotation state of the display 275.
It can be seen from the above technical solution that the present application provides a display device 200 and a rotation control method, the rotation control method being applicable to the display device 200 and used to adjust the rotation state of the display 275 in the display device 200. After obtaining a touch instruction input by the user for rotating the display 275, the method can, in response to the touch instruction, extract the touch action corresponding to the touch instruction and compare the touch action with the preset rotation action. If the touch action is the same as the preset rotation action, the rotating assembly 276 is controlled to adjust the rotation state of the display 275.
The method can control the rotation of the rotating assembly 276 through touch interaction, so that detection is triggered when the user's fingers press on the touch component 277, and the display device 200 is driven to rotate when the detection result matches the preset rotation action, allowing the user to freely operate the rotation of the display device 200 without relying on peripherals such as a remote control.
Since the above implementations are all described with reference to and in combination with other implementations, different embodiments share common parts; identical or similar parts between the various embodiments in this specification may be referred to each other and are not elaborated again here.
It should be noted that, in this specification, relational terms such as "first" and "second" are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a circuit structure, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a circuit structure, article, or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the circuit structure, article, or device that includes the element.
Other embodiments of the present application will readily occur to those skilled in the art after considering the specification and practicing the disclosure of the invention herein. The present application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the present application and include common knowledge or customary technical means in the technical field not disclosed by the present application. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present application are indicated by the content of the claims.
The above embodiments of the present application do not constitute a limitation on the protection scope of the present application.
Claims (10)
- 1. A display device, comprising: a camera for collecting a first depth image; a display for displaying a user interface, and displaying in the user interface a selector for indicating that an item is selected in the user interface; and a controller connected to the display and the camera respectively, the controller being configured to: in response to receiving a control signal input by a user for instructing a mixed call, send a mixed call request to a called end; upon receiving a confirmation signal of the called end, send the first depth image to a server; and upon receiving a first mixed image from the server, control the display to display the first mixed image, wherein the first mixed image comprises a depth image obtained by the server rendering a second person into the first depth image according to depth information of the second person, the second person being a person in a second depth image, and the second depth image being a depth image collected by the called end.
- 2. The display device according to claim 1, wherein the controller is further configured to: in response to receiving a control signal input by the user for instructing background switching, send a switch background request to the server; and upon receiving a second mixed image from the server, control the display to switch the first mixed image to the second mixed image, or control the display to switch the second mixed image back to the first mixed image, wherein the second mixed image comprises a depth image obtained by the server rendering a first person into the second depth image according to depth information of the first person, the first person being a person in the first depth image.
- 3. The display device according to claim 1, wherein before receiving the control signal input by the user for instructing a mixed call, the controller is further configured to: establish a video call connection with the called end; and control the display to display a mixed call control on the user interface of the video call, wherein the mixed call control, when triggered, generates the control signal for instructing the mixed call.
- 4. The display device according to claim 1, wherein before sending the first depth image to the server, the controller is further configured to: control the display to display first prompt information for confirming the mixed call; and receive a control signal input by the user corresponding to the first prompt information.
- 5. A display device, comprising: a camera for collecting a second depth image; a display for displaying a user interface, and displaying in the user interface a selector for indicating that an item is selected in the user interface; and a controller connected to the display and the camera respectively, the controller being configured to: in response to receiving a mixed call request of a calling end, control the display to display second prompt information requesting a mixed call; in response to receiving a control signal input by a user corresponding to the second prompt information, send a confirmation signal to a server; upon receiving a person depth image request of the server, extract depth information of a second person from the second depth image and send the depth information of the second person to the server; and upon receiving a first mixed image from the server, control the display to display the first mixed image, wherein the first mixed image comprises a depth image obtained by the server rendering the second person into a first depth image according to the depth information of the second person, the first depth image being a depth image collected by the calling end.
- 6. The display device according to claim 5, wherein the controller is further configured to: send the second depth image to the server, so that the server renders a first person into the second depth image according to depth information of the first person to obtain a second mixed image, the first person being a person in the first depth image, and the first depth image being a depth image collected by the calling end.
- 7. The display device according to claim 5, wherein extracting the depth information of the second person from the second depth image comprises: performing human body recognition on the second depth image to obtain the position of the second person in the second depth image; and performing background segmentation according to the position of the second person in the second depth image to obtain the depth information of the second person.
- 8. A display method, comprising: sending a mixed call request of a calling end to a called end; upon receiving a confirmation signal of the called end, obtaining a first depth image collected by the calling end; obtaining depth information of a second person of the called end; rendering the second person into the first depth image according to the depth information of the second person to obtain a first mixed image; and sending the first mixed image to the calling end and the called end respectively.
- 9. The display method according to claim 8, further comprising: upon receiving a switch background request, extracting depth information of a first person from the first depth image; rendering the first person into the second depth image according to the depth information of the first person to obtain a second mixed image; and sending the second mixed image to the calling end and the called end respectively.
- 10. A server, configured to: send a mixed call request of a calling end to a called end; upon receiving a confirmation signal of the called end, obtain a first depth image collected by the calling end; obtain depth information of a second person of the called end; render the second person into the first depth image according to the depth information of the second person to obtain a first mixed image; and send the first mixed image to the calling end and the called end respectively.
Applications Claiming Priority
- CN202010635659.XA (published as CN111669662A, "Display device, video calling method and server"), filed 2020-07-03
- CN202010760662.4A (published as CN114095766B, "Display device and rotation control method"), filed 2020-07-31