WO2021218473A1 - Display method and display device - Google Patents

Display method and display device

Info

Publication number
WO2021218473A1
Authority
WO
WIPO (PCT)
Prior art keywords
display device
angle
initial
panoramic picture
user
Prior art date
Application number
PCT/CN2021/081562
Other languages
French (fr)
Chinese (zh)
Inventor
王大勇
于颜梅
杨鲁明
王卫明
鲍姗娟
陈验方
Original Assignee
海信视像科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202010342885.9A external-priority patent/CN113645502B/en
Priority claimed from CN202010559804.0A external-priority patent/CN113825001B/en
Application filed by 海信视像科技股份有限公司
Publication of WO2021218473A1 publication Critical patent/WO2021218473A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 Scrolling or panning
    • G06F3/0486 Drag-and-drop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • This application relates to the technical field of smart TVs, and in particular to a display method and display device.
  • A panoramic picture is a picture presented in a wide-angle representation, and it can capture more of the surrounding environment. Generally, a panoramic picture cannot be displayed in full on the display device. When content other than the currently displayed content needs to be browsed, the panoramic picture has to be adjusted so that the other content can be brought into the display area of the display device.
  • When browsing a panoramic picture on a display device, the remote control is usually needed to adjust the displayed content. For example, pressing the arrow keys on the remote control moves the panoramic picture in the corresponding direction on the screen of the display device, revealing content that was previously hidden on the opposite side. Alternatively, the operator's gestures are collected and the displayed content is adjusted according to the direction of the gesture; for example, if the operator's finger slides to the right on the screen of the display device, the panoramic picture moves to the right on the screen, revealing content that was previously hidden on its left side.
  • This application provides a display method and display device.
  • this application provides a panoramic picture browsing method, including:
  • while the display device displays a panoramic picture, identifying a target person in front of the display device, where the target person indicates an operator who browses the panoramic picture in front of the display device;
  • this application also provides a display device, including:
  • a detector, configured to collect images in front of the display device;
  • a controller, configured to:
  • while the display device displays a panoramic picture, identify a target person in front of the display device, where the target person indicates an operator who browses the panoramic picture in front of the display device;
  • FIG. 1 exemplarily shows a schematic diagram of an operation scene between a display device and a control device according to some embodiments
  • FIG. 2 exemplarily shows a hardware configuration block diagram of a display device 200 according to some embodiments
  • FIG. 3 exemplarily shows a block diagram of the hardware configuration of the control device 100 according to some embodiments
  • FIG. 4 exemplarily shows a schematic diagram of software configuration in a display device 200 according to some embodiments
  • FIG. 5 exemplarily shows a schematic diagram of the icon control interface display of the application program in the display device 200 according to some embodiments
  • FIG. 6 is a schematic diagram of interaction between a display device 200 and an operator according to an embodiment of the application
  • FIG. 7 is a flowchart of a panoramic picture browsing method shown in an embodiment of the application.
  • FIG. 8 is a schematic diagram of a detector 230 collecting images according to an embodiment of the application.
  • FIG. 9 is a schematic diagram of the angle of the aircraft shown in an embodiment of the application.
  • FIG. 10 is a schematic diagram of interaction between another display device 200 and an operator according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of the initial face angle of the target person obtained by the controller 110 according to an embodiment of the application.
  • FIG. 12 is a schematic diagram of the current face angle of the target person obtained by the controller 110 according to an embodiment of the application.
  • FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment
  • FIG. 14 exemplarily shows a flowchart of a method for obtaining the initial position parameter of a target control according to an embodiment
  • FIG. 15 exemplarily shows a schematic diagram of the reference coordinate system according to the embodiment.
  • FIG. 16 exemplarily shows a schematic diagram of environmental image data corresponding to the initial position in the embodiment
  • FIG. 17 exemplarily shows a schematic diagram of environmental image data corresponding to the end position in the embodiment
  • FIG. 18 exemplarily shows a schematic diagram of the position change of the center point of the face frame on the display interface during the movement of the user according to the embodiment
  • FIG. 19 exemplarily shows a flowchart of a method for calculating the offset of a target control according to an embodiment
  • FIG. 20 exemplarily shows a schematic diagram when the theoretical second distance is determined according to the embodiment
  • FIG. 21 exemplarily shows the first schematic diagram when the position of the control is dynamically adjusted according to the embodiment
  • Fig. 22 exemplarily shows a second schematic diagram when the position of the control is dynamically adjusted according to the embodiment.
  • The term "module" used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that can perform the function related to the element.
  • remote control refers to a component of an electronic device (such as the display device disclosed in this application), which can generally control the electronic device wirelessly within a short distance.
  • Generally, infrared and/or radio frequency (RF) signals and/or Bluetooth are used to connect with the electronic device, and functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors may also be included.
  • For example, a handheld touch remote control uses a touch-screen user interface to replace most of the physical built-in hard keys of a typical remote control device.
  • gesture used in this application refers to a user's behavior through a change of hand shape or hand movement to express expected ideas, actions, goals, and/or results.
  • Fig. 1 exemplarily shows a schematic diagram of an operation scenario between a display device and a control device according to an embodiment. As shown in FIG. 1, the user can operate the display device 200 through the mobile terminal 300 and the control device 100.
  • The control device 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication methods, and the display device 200 is controlled wirelessly or through other wired methods.
  • the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc.
  • The user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power on/off keys on the remote control, so as to realize the functions of controlling the display device 200.
  • mobile terminals, tablet computers, computers, notebook computers, and other smart devices can also be used to control the display device 200.
  • an application program running on a smart device is used to control the display device 200.
  • the application can be configured to provide users with various controls in an intuitive user interface (UI) on the screen associated with the smart device.
  • In some embodiments, a software application may be installed on the mobile terminal 300 and on the display device 200, so that connection and communication are realized through a network communication protocol, achieving one-to-one control operation and data communication.
  • the mobile terminal 300 can be used to establish a control command protocol with the display device 200, the remote control keyboard can be synchronized to the mobile terminal 300, and the function of controlling the display device 200 can be realized by controlling the user interface on the mobile terminal 300. It is also possible to transmit the audio and video content displayed on the mobile terminal 300 to the display device 200 to realize the synchronous display function.
  • the display device 200 also performs data communication with the server 400 through multiple communication methods.
  • the display device 200 may be allowed to communicate through a local area network (LAN), a wireless local area network (WLAN), and other networks.
  • the server 400 may provide various contents and interactions to the display device 200.
  • the display device 200 can receive software program updates or access a remotely stored digital media library by sending and receiving information and interacting with an electronic program guide (EPG).
  • the server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
  • the server 400 provides other network service content such as video-on-demand and advertising services.
  • the display device 200 may be a liquid crystal display, an OLED display, or a projection display device.
  • the specific display device type, size, resolution, etc. are not limited, and those skilled in the art can understand that the display device 200 can make some changes in performance and configuration as required.
  • the display device 200 may also additionally provide computer-supported functions of smart network TV, including but not limited to, network TV, smart TV, Internet Protocol TV (IPTV), and the like.
  • FIG. 2 exemplarily shows a block diagram of the hardware configuration of the display device 200 according to an exemplary embodiment.
  • The display device 200 includes at least one of a controller 250, a tuner and demodulator 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.
  • the display 275 is used to receive the image signal output from the first processor, and display the components of the video content and images and the menu manipulation interface.
  • the display 275 includes a display screen component for presenting images, and a driving component for driving image display.
  • The displayed video content may come from broadcast television content, that is, various broadcast signals received through wired or wireless communication protocols; or it may be various image content received from a network server through a network communication protocol.
  • the display 275 is used to present a user manipulation UI interface generated in the display device 200 and used to control the display device 200.
  • a driving component for driving the display is further included.
  • the display 275 is a projection display, and may also include a projection device and a projection screen.
  • the communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types.
  • the communicator may include at least one of Wifi chip, Bluetooth communication protocol chip, wired Ethernet communication protocol chip or other network communication protocol chip or near field communication protocol chip, and infrared receiver.
  • the display device 200 may establish control signal and data signal transmission and reception with the external control device 100 or the content providing device through the communicator 220.
  • the user interface 265 may be used to receive infrared control signals of the control device 100 (such as an infrared remote control, etc.).
  • The detector 230 is a component of the display device 200 used to collect signals from the external environment or to interact with the outside.
  • The detector 230 includes a light receiver, a sensor for collecting the intensity of ambient light, so that display parameters can be adaptively changed according to the collected ambient light, and the like.
  • The detector 230 may also include an image collector, such as a camera, which can be used to collect external environment scenes, to collect attributes of the user, or to interact with user gestures, so that display parameters can be adaptively changed and user gestures can be recognized to realize interaction with the user.
  • the detector 230 may also include a temperature sensor or the like, for example, by sensing the ambient temperature.
  • the display device 200 may adaptively adjust the display color temperature of the image. For example, when the temperature is relatively high, the display device 200 can be adjusted to display a colder image color temperature, or when the temperature is relatively low, the display device 200 can be adjusted to display a warmer image.
  • the detector 230 may also be a sound collector or the like, such as a microphone, which may be used to receive the user's voice.
  • For example, the sound collector receives a voice signal containing a control instruction for the user to control the display device 200, or collects environmental sounds used to identify the type of environmental scene, so that the display device 200 can adapt to the environmental noise.
  • the input/output interface 255 is configured to perform data transmission between the controller 250 and other external devices or other controllers 250. Such as receiving video signal data and audio signal data from external devices, or command instruction data.
  • The external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface, a composite video input interface, a USB input interface, an RGB port, and so on. The above-mentioned interfaces may also form a composite input/output interface.
  • The tuner and demodulator 210 is configured to receive broadcast television signals through wired or wireless reception, and to perform modulation and demodulation processing such as amplification, mixing, and resonance, so as to demodulate, from among multiple wireless or cable broadcast television signals, the audio and video signal of the television channel selected by the user, together with the EPG data signal.
  • The frequency demodulated by the tuner and demodulator 210 is controlled by the controller 250: the controller 250 sends a control signal according to the user's selection, so that the modem responds to the frequency of the television channel selected by the user and demodulates the television signal carried on that frequency.
  • broadcast television signals can be classified into terrestrial broadcast signals, cable broadcast signals, satellite broadcast signals, or Internet broadcast signals according to different television signal broadcast formats. Or it can be divided into digital modulation signal, analog modulation signal, etc. according to different modulation types. Or it can be divided into digital signal, analog signal, etc. according to different signal types.
  • The controller 250 and the tuner and demodulator 210 may be located in different separate devices; that is, the tuner and demodulator 210 may also be in an external device of the main device where the controller 250 is located, such as an external set-top box. In this way, the set-top box outputs the demodulated television audio and video signals of the received broadcast television signals to the main device, and the main device receives the audio and video signals through the first input/output interface.
  • the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory.
  • the controller 250 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
  • the object may be any one of selectable objects, such as a hyperlink or an icon.
  • Operations related to the selected object are, for example, operations for displaying the page, document, or image linked to by a hyperlink, or operations corresponding to the icon.
  • the user command for selecting the UI object may be a command input through various input devices (for example, a mouse, a keyboard, a touch pad, etc.) connected to the display device 200 or a voice command corresponding to the voice spoken by the user.
  • The controller 250 includes at least one of a random access memory (RAM) 251, a read-only memory (ROM) 252, a video processor 270, an audio processor 280, other processors 253 such as a graphics processing unit (GPU), a central processing unit (CPU) 254, a communication interface, and a communication bus 256 (Bus).
  • The communication bus connects the various components.
  • RAM 251 is used to store temporary data of the operating system or other running programs
  • the ROM 252 is used to store various system startup instructions.
  • The ROM 252 is used to store a Basic Input Output System (BIOS), which completes the power-on self-check of the system, the initialization of each functional module in the system, the basic input/output drivers of the system, and the booting of the operating system.
  • When the power of the display device 200 is turned on, the CPU runs the system startup instructions in the ROM 252 and copies the temporary data of the operating system stored in the memory to the RAM 251, so as to start or run the operating system.
  • After the operating system is started, the CPU copies the temporary data of the various application programs in the memory to the RAM 251, so as to start or run the various application programs.
  • The CPU processor 254 is configured to execute the operating system and application program instructions stored in the memory, and to execute various applications, data, and content according to the various interactive instructions received from the outside, so as to finally display and play various audio and video content.
  • the CPU processor 254 may include multiple processors.
  • the multiple processors may include a main processor and one or more sub-processors.
  • the main processor is used to perform some operations of the display device 200 in the pre-power-on mode, and/or to display images in the normal mode.
  • The graphics processor 253 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays the various objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit so that they can be displayed on the display.
  • the video processor 270 is configured to receive an external video signal, and perform decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, etc. according to the standard codec protocol of the input signal. After video processing, a signal that can be directly displayed or played on the display device 200 can be obtained.
  • the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.
  • the demultiplexing module is used to demultiplex the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing module will demultiplex into a video signal and an audio signal.
  • the video decoding module is used to process the demultiplexed video signal, including decoding and scaling.
  • The image synthesis module, such as an image synthesizer, is used to superimpose and mix the GUI signal, generated by the graphics generator according to user input or by the system itself, with the scaled video image, so as to generate an image signal for display.
  • The frame rate conversion module is used to convert the frame rate of the input video, for example converting a 60 Hz frame rate to a 120 Hz or 240 Hz frame rate, which is usually realized by inserting frames (a simple illustration follows the module descriptions below).
  • The display formatting module is used to convert the frame-rate-converted video output signal into a signal conforming to the display format, for example an RGB data signal output.
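  • As a toy illustration of the frame-insertion idea mentioned above (real FRC modules usually insert motion-interpolated frames rather than plain repeats, so this is only a sketch), doubling a 60 Hz sequence to 120 Hz can be done by emitting each frame twice:

```python
def double_frame_rate(frames):
    """Naive frame rate conversion: emit each input frame twice, turning a
    60 Hz frame sequence into a 120 Hz one. Hardware FRC modules typically
    insert motion-interpolated frames instead of simple repeats."""
    doubled = []
    for frame in frames:
        doubled.append(frame)   # original frame
        doubled.append(frame)   # inserted frame (here a plain repeat)
    return doubled
```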
  • the graphics processor 253 can be integrated with the video processor, or can be configured separately.
  • When they are integrated, the processing of the graphics signal output to the display is performed jointly; when they are configured separately, they perform different functions, for example in a GPU + FRC (frame rate conversion) architecture.
  • The audio processor 280 is used to receive an external audio signal, and to perform decompression and decoding, as well as processing such as noise reduction, digital-to-analog conversion, and amplification, according to the standard codec protocol of the input signal, so as to obtain a sound signal that can be played by the speaker.
  • the video processor 270 may include one or more chips.
  • the audio processor may also include one or more chips.
  • the video processor 270 and the audio processor 280 may be separate chips, or may be integrated in one or more chips together with the controller.
  • The audio output, such as the speaker 286, receives the sound signal output by the audio processor 280 under the control of the controller 250. In addition to the speaker carried by the display device 200 itself, the sound can also be output to an external audio output terminal of an external device, such as an external audio interface or an earphone interface; the audio output may also include a short-distance communication module in the communication interface, for example a Bluetooth module for outputting sound to a Bluetooth speaker.
  • the power supply 290 under the control of the controller 250, provides power supply support for the display device 200 with power input from an external power supply.
  • The power supply 290 may include a built-in power supply circuit installed inside the display device 200, or it may be an external power supply installed outside the display device 200, in which case a power interface for connecting the external power supply is provided in the display device 200.
  • the user interface 265 is used to receive user input signals, and then send the received user input signals to the controller 250.
  • the user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
  • the user inputs user commands through the control device 100 or the mobile terminal 300, the user input interface is based on the user's input, and the display device 200 responds to the user's input through the controller 250.
  • the user may input a user command on a graphical user interface (GUI) displayed on the display 275, and the user input interface receives the user input command through the graphical user interface (GUI).
  • the user may input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
  • the "user interface” is a medium interface for interaction and information exchange between an application or operating system and a user, and it realizes the conversion between the internal form of information and the form acceptable to the user.
  • the commonly used form of the user interface is the Graphic User Interface (GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It can be an icon, window, control and other interface elements displayed on the display screen of an electronic device.
  • A control may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • The memory 260 stores various software modules used to drive the display device 200.
  • various software modules stored in the first memory include: at least one of a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.
  • The basic module is a lower-layer software module used for signal communication between the various hardware components in the display device 200 and for sending processing and control signals to the upper-layer modules.
  • the detection module is a management module used to collect various information from various sensors or user input interfaces, and perform digital-to-analog conversion and analysis management.
  • the voice recognition module includes a voice parsing module and a voice command database module.
  • the display control module is a module for controlling the display to display image content, and can be used to play information such as multimedia image content and UI interface.
  • the communication module is a module used for control and data communication with external devices.
  • the browser module is a module used to perform data communication between browsing servers.
  • the service module is used to provide various services and modules including various applications.
  • the memory 260 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects, etc.
  • Fig. 3 exemplarily shows a configuration block diagram of the control device 100 according to an exemplary embodiment.
  • the control device 100 includes a controller 110, a communication interface 130, a user input/output interface, a memory, and a power supply.
  • the control device 100 is configured to control the display device 200, and can receive input operation instructions from the user, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, so as to serve as an intermediary between the user and the display device 200.
  • For example, when the user operates the channel up/down keys on the control device 100, the display device 200 responds to the channel up/down operation.
  • control device 100 may be a smart device.
  • control device 100 can install various applications for controlling the display device 200 according to user requirements.
  • The mobile terminal 300 or another smart electronic device can perform a function similar to that of the control device 100 after installing an application for controlling the display device 200.
  • By installing such an application, the user can obtain various function keys or virtual buttons of a graphical user interface on the mobile terminal 300 or other smart electronic device, so as to realize the functions of the physical keys of the control device 100.
  • the controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface 130, and a communication bus.
  • The controller 110 is used to control the running and operation of the control device 100, the communication and cooperation between its internal components, and the external and internal data processing functions.
  • the communication interface 130 realizes the communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the received user input signal is sent to the display device 200.
  • The communication interface 130 may include at least one of a WiFi chip 131, a Bluetooth module 132, an NFC module 133, and other near field communication modules.
  • The user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch panel 142, a sensor 143, a button 144, and other input interfaces.
  • the user can implement the user instruction input function through voice, touch, gesture, pressing and other actions.
  • the input interface converts the received analog signal into a digital signal and the digital signal into a corresponding instruction signal, and sends it to the display device 200.
  • the output interface includes an interface for sending the received user instruction to the display device 200.
  • it may be an infrared interface or a radio frequency interface.
  • In the case of an infrared signal interface, a user input instruction needs to be converted into an infrared control signal according to the infrared control protocol, and then sent to the display device 200 via the infrared sending module.
  • In the case of a radio frequency signal interface, a user input instruction needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 by the radio frequency sending terminal.
  • control device 100 includes at least one of a communication interface 130 and an input/output interface 140.
  • The control device 100 is configured with a communication interface 130, such as a WiFi, Bluetooth, or NFC module, which can encode user input instructions and send them to the display device 200 through the WiFi protocol, the Bluetooth protocol, or the NFC protocol.
  • The memory 190 is used to store various operating programs, data, and applications for driving and controlling the control device 100 under the control of the controller.
  • the memory 190 can store various control signal instructions input by the user.
  • The power supply 180 is used to provide operating power support for each element of the control device 100 under the control of the controller. It can be a battery and the related control circuit.
  • the system may include a kernel (Kernel), a command parser (shell), a file system, and an application program.
  • the kernel, shell, and file system together form the basic operating system structure. They allow users to manage files, run programs, and use the system.
  • After the kernel starts, it activates the kernel space, abstracts the hardware, initializes hardware parameters, and runs and maintains the virtual memory, the scheduler, signals, and inter-process communication (IPC).
  • the Shell and user applications are loaded.
  • After an application is started, it is compiled into machine code, forming a process.
  • The system is divided into four layers, which are, from top to bottom, the applications layer (referred to as the "application layer"), the application framework layer (referred to as the "framework layer"), the Android runtime and system library layer (referred to as the "system runtime library layer"), and the kernel layer.
  • These applications may be Window programs, system setting programs, clock programs, camera applications, and the like, which are included in the operating system; they may also be application programs developed by third-party developers, such as a Hi Jian program, a karaoke program, a magic mirror program, and so on.
  • the application package in the application layer is not limited to the above examples, and may actually include other application packages, which is not limited in the embodiment of the present application.
  • the framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • The application framework layer is equivalent to a processing center, which decides the actions to be taken by the applications in the application layer. Through the API interface, an application can access the resources in the system and obtain the services of the system during execution.
  • The application framework layer in the embodiments of the present application includes managers (Managers), content providers (Content Providers), and the like, where the managers include at least one of the following modules: an Activity Manager, which interacts with all the activities running in the system; a Location Manager, which provides system services or applications with access to the system location services; a Package Manager, which retrieves various information related to the application packages currently installed on the device; a Notification Manager, which controls the display and clearing of notification messages; and a Window Manager, which manages the icons, windows, toolbars, wallpapers, and desktop widgets on the user interface.
  • The activity manager is used to manage the life cycle of each application and the usual navigation and back functions, such as controlling the exit of an application (including switching the user interface currently displayed in the display window to the system desktop), opening an application, and going back (including switching the user interface currently displayed in the display window to the upper-level user interface of the currently displayed one), and so on.
  • The window manager is used to manage all window programs, such as obtaining the size of the display screen, determining whether there is a status bar, locking the screen, capturing the screen, and controlling changes in the display window (for example, shrinking the display window, or displaying it with dithering or distortion), and so on.
  • the system runtime layer provides support for the upper layer, that is, the framework layer.
  • the Android operating system will run the C/C++ library included in the system runtime layer to implement functions to be implemented by the framework layer.
  • The kernel layer is a layer between hardware and software. As shown in Figure 4, the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WiFi driver, a USB driver, an HDMI driver, sensor drivers (such as a fingerprint sensor, a temperature sensor, a touch sensor, a pressure sensor, etc.), and so on.
  • the kernel layer further includes a power drive module for power management.
  • the software programs and/or modules corresponding to the software architecture in FIG. 4 are stored in the first memory or the second memory shown in FIG. 2 or FIG. 3.
  • When a user input operation is received, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the input operation into the original input event (including the value of the input operation, the time stamp of the input operation and other information).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, recognizes the control corresponding to the input event according to the current position of the focus, and regards the input operation as a confirmation operation.
  • For example, when the control corresponding to the confirmation operation is the control of the magic mirror application icon, the magic mirror application calls the interface of the application framework layer to start the magic mirror application, and then starts the camera driver by calling the kernel layer, so as to capture still images or videos through the camera.
  • When the display device receives an input operation (such as a split-screen operation) acting on the display screen, the kernel layer generates a corresponding input event based on the input operation and reports the event to the application framework layer.
  • the activity manager of the application framework layer sets the window mode (such as multi-window mode) and the window position and size corresponding to the input operation.
  • the window management of the application framework layer draws the window according to the settings of the activity manager, and then sends the drawn window data to the display driver of the kernel layer, and the display driver displays the corresponding application interface in different display areas of the display screen.
  • The application layer includes at least one application that can display a corresponding icon control on the display, such as: a live TV application icon control, a video on demand application icon control, a media center application icon control, an application center icon control, a game application icon control, and so on.
  • the live TV application can provide live TV through different signal sources.
  • a live TV application may use input from cable TV, wireless broadcasting, satellite services, or other types of live TV services to provide TV signals.
  • the live TV application can display the video of the live TV signal on the display device 200.
  • video-on-demand applications can provide videos from different storage sources. Unlike live TV applications, VOD provides video display from certain storage sources. For example, video on demand can come from the server side of cloud storage, and from the local hard disk storage that contains stored video programs.
  • the media center application can provide various multimedia content playback applications.
  • the media center can provide services that are different from live TV or video on demand, and users can access various images or audio through the media center application.
  • the application center may provide for storing various application programs.
  • the application program may be a game, an application program, or some other application program that is related to a computer system or other equipment but can be run on a smart TV.
  • the application center can obtain these applications from different sources, store them in the local storage, and then run on the display device 200.
  • the display device 200 can display not only some ordinary pictures, but also panoramic pictures.
  • a panoramic picture is a picture displayed through a wide-angle representation method, and a panoramic picture can express more images of the surrounding environment. Generally, the panoramic picture cannot be completely displayed on the display device 200.
  • When content other than the currently displayed content of the panoramic picture needs to be browsed, the panoramic picture needs to be adjusted so that the other content of the panoramic picture can be brought into the display area of the display device 200.
  • When browsing a panoramic picture on the display device 200, the operator usually needs to use the remote control to adjust the displayed content. For example, pressing the arrow keys on the remote control moves the panoramic picture in the corresponding direction on the screen, revealing content that was previously hidden on the opposite side; or the display device 200 collects the operator's gesture and adjusts the displayed content according to the direction of the gesture. For example, if the operator's finger swipes to the right on the screen of the display device 200, the panoramic picture moves to the right on the screen, and the content that was previously hidden on the left side of the panoramic picture is then displayed.
  • In the above methods, all panoramic pictures are moved according to a fixed step preset in the display device 200.
  • When the operator only wants to fine-tune the displayed content of a certain panoramic picture, if the fixed step size is too large, the distance the panoramic picture moves on the screen of the display device 200 will also be too large, and it is difficult to meet the operator's requirement for fine adjustment of the panoramic picture.
  • The present application provides a panoramic picture browsing method and display device, which can control the moving direction and distance of the panoramic picture on the display device 200 through the change in the face angle of the operator in front of the display device 200, so that the content of the panoramic picture that the operator needs to see is brought onto the screen of the display device 200.
  • the display 275 can be used to display a panoramic picture
  • the detector 230 can be used to collect an image of a person in front of the display device 200
  • The controller 110 can be used to identify the target person based on the image collected by the detector 230 and to recognize the face angle information of the target person in front of the display device 200; it can also calculate the offset distance of the panoramic picture and control the movement of the panoramic picture.
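  • Putting these roles together, the following is a minimal sketch, in Python-style pseudocode, of the browsing loop described in this application; the helper names (capture_frame, identify_target_person, estimate_face_angles, compute_offsets, move_panoramic_picture) and the duration value are illustrative assumptions, not interfaces defined by this application:

```python
import time

FIRST_PRESET_DURATION = 1.0  # seconds; the actual first preset duration is not fixed by the text


def browse_panoramic_picture(display, detector, controller):
    """Illustrative main loop: identify the target person, sample the face
    angles before and after the first preset duration, then move the picture."""
    while display.is_showing_panoramic_picture():
        frame = detector.capture_frame()                  # image in front of the display
        target = controller.identify_target_person(frame)
        if target is None:
            continue                                      # nobody to take operations from

        initial = controller.estimate_face_angles(frame, target)   # initial roll/pitch/yaw
        time.sleep(FIRST_PRESET_DURATION)
        current = controller.estimate_face_angles(detector.capture_frame(), target)

        offset_x, offset_y = controller.compute_offsets(initial, current)
        display.move_panoramic_picture(offset_x, offset_y)          # step S104
```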
  • FIG. 6 is a schematic diagram of interaction between a display device 200 and an operator according to an embodiment of the application.
  • When browsing panoramic pictures using the panoramic picture browsing method provided by the embodiments of the present application, the operator needs to stand in front of the display device 200, and the operator controls the movement of the panoramic picture on the display device 200 by turning his or her head, so as to display the content that he or she wants to watch.
  • Fig. 7 is a flowchart of a panoramic picture browsing method shown in an embodiment of the application. As shown in Figure 7, the panoramic picture browsing method specifically includes the following steps:
  • step S101 when the display device 200 displays a panoramic picture, a target person in front of the display device 200 is identified, and the target person is used to indicate an operator who browses the panoramic picture in front of the display device 200.
  • the target person mentioned in this embodiment specifically refers to the actual operator.
  • When multiple people in front of the display device 200 are viewing panoramic pictures at the same time, the display device 200 can only receive operations from one person, and that person can be the target person.
  • the person in front of the display device is identified first by means of face recognition, and then it is specifically determined whether the person is the target person.
  • the way to select the target person is not unique.
  • the person closest to the display device 200 can be selected as the target person.
  • In this case, the pixel area of each face in front of the display device 200 needs to be compared; generally, the closer a person is to the display device 200, the larger the pixel area of the face. Alternatively, if a specific person is to be selected as the target person, it is necessary to check whether that specific face is present among the faces in front of the display device 200.
  • the step of identifying the target person in front of the display device 200 may specifically include:
  • Step S201: Detect whether there is a preset person in front of the display device 200, where the preset person represents an operator, pre-stored in the display device, who browses panoramic pictures.
  • each display device 200 stores a preset person.
  • The preset person can be set during initialization: when the display device 200 is used to browse a panoramic picture for the first time, it can be recognized whether this preset person is present in front of the display device 200.
  • In addition, the preset person can also come from the previous panoramic picture browsing session.
  • That is, the target person saved last time can be used as the preset person, and it is recognized whether this preset person is present in front of the display device 200.
  • step S202 if it exists, it is determined that the preset person in front of the display device 200 is the target person.
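  • A minimal sketch of the check in steps S201 and S202, assuming the preset person is stored as a face feature vector and that some face recognition model has already produced one embedding per detected face (the threshold value and helper names are illustrative assumptions):

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # assumed value; the application does not specify a threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_preset_person(face_embeddings, preset_embedding):
    """Return the index of the detected face that matches the stored preset
    person, or None if no face in front of the display matches (S201/S202)."""
    best_idx, best_sim = None, SIMILARITY_THRESHOLD
    for idx, embedding in enumerate(face_embeddings):
        similarity = cosine_similarity(embedding, preset_embedding)
        if similarity > best_sim:
            best_idx, best_sim = idx, similarity
    return best_idx
```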
  • the step of identifying the target person in front of the display device 200 may further include:
  • Step S301: In the case that there are multiple persons in front of the display device 200 and none of them is the preset person, calculate the pixel area corresponding to the face of each person respectively.
  • the person closest to the display device 200 is selected.
  • the distance between a person and the display device 200 is determined by the pixel area of the person's face. The closer the person is to the display device 200, the larger the pixel area of the person's face.
  • The controller 110 can identify the face image belonging to each person, and then further calculate the pixel area corresponding to each face image.
  • step S302 the person corresponding to the face with the largest pixel area is selected as the target person.
  • FIG. 8 is a schematic diagram of a detector 230 collecting images according to an embodiment of the application.
  • the controller 110 can recognize that the person standing in the front has the largest face pixel area, and then determine that the person is the target person.
  • the dashed frame in FIG. 8 is the face range of the target person recognized by the controller 110.
  • The controller 110 can regard this person as the target person, and then control the movement of the panoramic picture accordingly.
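  • Steps S301 and S302 reduce to picking the face with the largest bounding-box area; a short sketch follows (the (x, y, width, height) rectangle format is an assumption about how detected faces are represented):

```python
def select_target_person(face_boxes):
    """Pick the face with the largest pixel area (steps S301/S302).

    face_boxes: list of (x, y, width, height) face rectangles, in pixels,
    detected in the image collected by the detector 230.
    Returns the index of the target person, or None if no face was detected.
    """
    if not face_boxes:
        return None
    areas = [w * h for (_, _, w, h) in face_boxes]
    return max(range(len(areas)), key=lambda i: areas[i])
```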
  • Step S102 Obtain the initial face angle and the current face angle of the target person before and after the first preset duration.
  • the panoramic picture needs to be adjusted according to the change in the angle of the target person's face.
  • A change in the face angle takes a certain amount of time, and this time is the first preset duration.
  • the controller 110 may obtain the face rotation angle of the target person in front of the display device 200 after recognizing the target person.
  • These angles can be defined with reference to the three attitude angles used in the aerospace field.
  • FIG. 9 is a schematic diagram of the aircraft attitude angles shown in an embodiment of the application. As shown in FIG. 9, the attitude of an aircraft is described by its roll angle, pitch angle, and yaw angle.
  • In this embodiment, the concepts of roll angle, pitch angle, and yaw angle are also used to define the face angle of the target person, where the roll angle refers to the angle at which the face tilts sideways, and the pitch angle refers to the angle at which the face tilts up or down.
  • the yaw angle refers to the angle at which the face rotates in the horizontal direction.
  • The initial face angle (before the first preset duration) and the current face angle (after it) are generally different.
  • The initial face angle specifically includes three angles of the face: the initial roll angle, the initial pitch angle, and the initial yaw angle.
  • The current face angle likewise includes three angles of the face: the current roll angle, the current pitch angle, and the current yaw angle.
  • Therefore, it is necessary to obtain the initial roll angle, the initial pitch angle, and the initial yaw angle of the target person in front of the display device 200, and then, after the first preset duration has elapsed, obtain again the current roll angle, the current pitch angle, and the current yaw angle of the target person.
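  • The two samples can be kept in a small data structure, as in the sketch below (the class and function names are illustrative, and degrees are assumed as the unit, matching the values shown later for FIG. 11 and FIG. 12):

```python
from dataclasses import dataclass


@dataclass
class FaceAngles:
    """Face attitude in degrees, borrowing the aircraft convention:
    roll = sideways tilt, pitch = up/down tilt, yaw = horizontal turn."""
    roll: float
    pitch: float
    yaw: float


def angle_change(initial: FaceAngles, current: FaceAngles) -> FaceAngles:
    """Per-axis change in face attitude over the first preset duration."""
    return FaceAngles(
        roll=current.roll - initial.roll,
        pitch=current.pitch - initial.pitch,
        yaw=current.yaw - initial.yaw,
    )
```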
  • Step S103 Obtain the offset distance of the panoramic picture by using the initial face angle and the current face angle.
  • After the first preset duration, the angle of the face will have changed, and the current face angle will have a certain offset from the initial face angle. In this embodiment, this angular offset is used to obtain the offset distance corresponding to the panoramic picture.
  • FIG. 10 is a schematic diagram of another interaction between a display device 200 and an operator according to an embodiment of the application.
  • the display device 200 when the display device 200 displays a panoramic picture, it can only display a screen as large as the screen of the display 275. If the operator wants to browse the content on the right side of the panoramic picture that is not displayed on the screen, he can face the display device 200 and turn his face to the right.
  • The display device 200 will calculate, according to the angle by which the operator turns the face, the distance by which the panoramic picture should be offset to the left, and then control the panoramic picture to shift to the left, so that more content on the right side is displayed on the display device 200.
  • In practice, the operator's turn of the face is not absolutely to the right or to the left, and it usually has a certain upward or downward offset as well.
  • Correspondingly, the movement of the panoramic picture on the display 275 is not strictly limited to left, right, up, or down.
  • The operator can also turn the face in a specific direction to browse content in that direction according to his or her own browsing needs. For example, if the operator rotates the face toward the upper right, the panoramic picture moves in the opposite direction, toward the lower left, to display the content in the upper-right direction. Therefore, in this embodiment, the three angles of the face rotation need to be detected and used in the calculation to obtain a more accurate offset distance for the panoramic picture.
  • Step S104 Adjust the display content of the panoramic picture on the display device 200 according to the offset distance.
  • the panoramic picture displayed on the display device 200 is a two-dimensional picture, and the direction in which the two-dimensional picture moves includes two directions, horizontal and vertical. Therefore, the calculated offset distance includes the offset distance of the panoramic picture in the horizontal direction and the offset distance in the vertical direction.
  • The specific adjustment method is not limited to moving the panoramic picture in the direction opposite to the operator's face rotation; moving the panoramic picture in the same direction as the face rotation can also achieve the purpose of browsing the panoramic picture in this embodiment.
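  • Once the horizontal and vertical offset distances are known, applying them is a matter of translating the picture and keeping the screen inside the picture's bounds. Below is a sketch under the assumption that the picture is positioned by its top-left corner relative to the screen's top-left corner and is larger than the screen; the sign convention is illustrative, since the embodiment allows either direction:

```python
def adjust_panorama_position(pos_x, pos_y, offset_x, offset_y,
                             picture_width, picture_height,
                             screen_width, screen_height):
    """Apply the horizontal and vertical offsets (step S104) and clamp the
    position so the screen never shows past the edges of the panoramic
    picture. pos is the picture's top-left corner relative to the screen's
    top-left corner; for a picture larger than the screen it ranges from
    (screen - picture), a negative value, up to 0."""
    new_x = max(min(pos_x + offset_x, 0), screen_width - picture_width)
    new_y = max(min(pos_y + offset_y, 0), screen_height - picture_height)
    return new_x, new_y
```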
  • the face of the target person identified in the above steps can be displayed on the screen of the display 275, so that the target person can always observe the angle of his face, and then appropriately determine the rotation angle based on the range of the panoramic picture movement.
  • The specific display position can be set, for example, in the upper-right corner of the screen. After observing the target person on the screen, the other people in front of the display device 200 will also know who the actual operator is, so that they will not come too close to the display device 200 and affect the browsing of the panoramic picture.
  • The panoramic picture browsing method in the embodiments of the present application realizes browsing of the panoramic picture through the change in the face angle of the target person in front of the display device, and the offset distance of the panoramic picture is determined by the angle through which the target person's face rotates, so that the operator can browse the panoramic picture according to his or her own needs, and there is no need to adjust the panoramic picture by a fixed moving step.
  • the step of obtaining the offset distance of the panoramic picture by using the initial face angle and the current face angle includes:
  • Step S401 Calculate the horizontal offset distance of the panoramic picture by using the initial roll angle, the current roll angle, the initial yaw angle and the current yaw angle.
  • the following formula may be used to calculate the horizontal offset distance of the panoramic picture:
  • Offset X represents the horizontal offset distance of the panoramic picture
  • Roll 1 and Roll 2 represent the initial roll angle and the current roll angle
  • Yaw 1 and Yaw 2 represent the initial yaw angle and the current yaw angle
  • A1 represents the correction coefficient
  • delta X represents the minimum adjustment angle value in the horizontal direction
  • Time represents the minimum adjustment offset time.
  • Step S402 Calculate the vertical offset distance of the panoramic picture by using the initial pitch angle, the current pitch angle, the initial roll angle and the current roll angle.
  • the following formula may be used to calculate the vertical offset distance of the panoramic picture:
  • Offset Y represents the vertical offset distance of the panoramic picture
  • Roll 1 and Roll 2 represent the initial roll angle and the current roll angle
  • Pitch 1 and Pitch 2 represent the initial pitch angle and the current pitch angle
  • A2 represents the correction coefficient
  • delta Y represents the minimum adjustment angle value in the vertical direction
  • Time represents the minimum adjustment offset time.
  • delta X and delta Y need to be determined according to the resolution of the panoramic picture.
  • the larger the resolution of the panoramic picture, the larger the values of delta X and delta Y;
  • Time needs to be determined according to the first preset duration; generally, the longer the first preset duration, the longer the Time.
  • the values of A1 and A2 are usually less than 0.5.
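  • For illustration only, the following sketch shows one possible way of combining the quantities defined above. The exact formulas appear in the original disclosure only as images and are not reproduced here, so the proportional form used below (angle change divided by the minimum adjustment angle, scaled by the correction coefficient and the minimum adjustment offset time) and the function names are assumptions.

```python
def horizontal_offset(roll1, roll2, yaw1, yaw2, a1, delta_x, time_):
    """Sketch of Offset X (assumed form): the horizontal offset grows with the
    change in yaw, corrected by the change in roll via A1, quantized by the
    minimum adjustment angle delta X and scaled by the minimum adjustment
    offset time."""
    return ((yaw2 - yaw1) + a1 * (roll2 - roll1)) / delta_x * time_


def vertical_offset(pitch1, pitch2, roll1, roll2, a2, delta_y, time_):
    """Sketch of Offset Y (assumed form): the vertical offset grows with the
    change in pitch, corrected by the change in roll via A2."""
    return ((pitch2 - pitch1) + a2 * (roll2 - roll1)) / delta_y * time_
```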
  • FIG. 11 is a schematic diagram of the initial face angle of the target person obtained by the controller 110 according to an embodiment of the application
  • FIG. 12 is a schematic diagram of the current face angle of the target person obtained by the controller 110 according to an embodiment of the application.
  • the content in the dashed frame represents the face range of the target person recognized by the controller 110.
  • the initial roll angle Roll 1 of the target person's face is -3.2°
  • the initial yaw angle Yaw 1 is 4.2°
  • the initial pitch angle Pitch 1 is -20.9°
  • the current roll angle Roll 2 of the target person's face is -0.4°
  • the current yaw angle Yaw 2 is 20.5°
  • the current pitch angle Pitch 2 is -2.0°.
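  • Feeding the example angles of FIG. 11 and FIG. 12 into the sketch above might look as follows; the coefficient and step values are illustrative assumptions, so the resulting numbers only demonstrate the data flow, not the output of the original formulas.

```python
# Example angles taken from FIG. 11 / FIG. 12 (degrees); coefficients are assumed.
roll1, yaw1, pitch1 = -3.2, 4.2, -20.9    # initial face angles
roll2, yaw2, pitch2 = -0.4, 20.5, -2.0    # current face angles

A1 = A2 = 0.3            # correction coefficients (usually less than 0.5)
DELTA_X = DELTA_Y = 2.0  # minimum adjustment angle values (assumed)
TIME = 0.3               # minimum adjustment offset time (assumed, seconds)

offset_x = horizontal_offset(roll1, roll2, yaw1, yaw2, A1, DELTA_X, TIME)
offset_y = vertical_offset(pitch1, pitch2, roll1, roll2, A2, DELTA_Y, TIME)
print(offset_x, offset_y)  # the picture is then shifted by these two distances
```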
  • the controller 110 can finally control the panoramic picture to move a corresponding distance in the horizontal direction and the vertical direction, thereby displaying more content of the panoramic picture.
  • the controller 110 may also pre-determine a coordinate system on the screen of the display 275; for example, a two-dimensional rectangular coordinate system may be established with one of the four vertices of the screen as the coordinate origin, or with the exact center of the screen as the coordinate origin, and the panoramic picture is then moved based on the established coordinate system.
  • the following steps are further included:
  • Step S501 If the target person in front of the display device 200 is not recognized after a preset period of time, the pixel area corresponding to each human face currently existing in front of the display device 200 is recalculated.
  • the operator who initially operated the panoramic picture browsing may quit the operation midway, but other operators still need to continue browsing.
  • the detector 230 cannot recognize the original target person again, and a new target person needs to be determined again.
  • the method for determining the new target person is as described above: the face pixel areas of all persons in front of the display device 200 are compared, and the person closest to the display device 200 is selected as the target person.
  • Step S502 The person corresponding to the face with the largest pixel area is selected as the target person.
  • the controller 110 can identify the face image belonging to each person, and then further calculate the pixel area corresponding to each face image.
  • Step S503 Obtain the initial face angle of the target person.
  • Step S504 After a second preset time period, obtain the current face angle of the target person.
  • the second preset duration in this embodiment is also a preset duration, but it is longer than the first preset duration; for example, if the first preset duration is T1, the second preset duration may be 2T1
  • the first preset duration T1 is usually set to 0.3 seconds.
  • the panoramic picture may also be adjusted only after waiting for a certain period of time; this length of time needs to take into account both the efficiency and the effect of the panoramic picture adjustment, and is usually set to 2 seconds.
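  • A minimal sketch of the re-acquisition flow of steps S501 to S504, assuming hypothetical helpers detect_faces, face_area and face_angles and using the durations mentioned above (T1 = 0.3 s, second preset duration = 2 × T1, and an adjustment wait of about 2 s):

```python
import time

T1 = 0.3            # first preset duration (seconds)
T2 = 2 * T1         # second preset duration (seconds)
ADJUST_WAIT = 2.0   # waiting time before the next picture adjustment (seconds)

def reacquire_target(detect_faces, face_area, face_angles):
    """If the previous target person is no longer recognized (S501), select the
    person whose face covers the largest pixel area (S502), then sample the
    face angles before and after the second preset duration (S503 / S504)."""
    faces = detect_faces()
    if not faces:
        return None                          # nobody in front of the display device
    target = max(faces, key=face_area)       # S502: face with the largest pixel area
    initial_angles = face_angles(target)     # S503: initial face angle
    time.sleep(T2)                           # wait for the second preset duration
    current_angles = face_angles(target)     # S504: current face angle
    return initial_angles, current_angles
```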
  • the panoramic picture browsing method not only needs to recognize the target person and determine whether the target person has changed, but also needs to determine whether the currently browsed panoramic picture has changed. If, after the panoramic picture has been adjusted according to the offset distance and displayed on the display device 200, the controller 110 detects that the panoramic picture currently displayed on the display 275 is no longer the previous one, the process of identifying the target person in the above embodiment needs to be performed again for the new panoramic picture.
  • if the controller 110 detects that the panoramic picture currently displayed on the display 275 is still the previous picture, the controller 110 can directly recognize the change of the face angle of the target person in front of the display device 200, so as to carry out the next adjustment of the panoramic picture.
  • the embodiment of the present application provides a panoramic picture browsing method.
  • the target person in front of the display device 200 can be recognized; the initial face angle of the target person and the current face angle after the preset duration are then obtained, and the offset distance that the panoramic picture on the display device 200 needs to move is calculated by using the initial face angle and the current face angle.
  • the panoramic picture is then adjusted according to the offset distance, so that content of the panoramic picture that has not been displayed before is displayed on the display device 200.
  • the solution of the present application can realize browsing of the panoramic picture through the change of the face angle of the target person in front of the display device 200; the offset distance of the panoramic picture is determined by the angle through which the target person's face rotates, so the operator can browse the panoramic picture according to his own needs, without adjusting the panoramic picture by a fixed moving step.
  • the present application also provides a display device 200, including: a display 275; a detector 230 for collecting an image in front of the display device 200; and a controller 110 configured to: when the display device 200 displays a panoramic picture, recognize the target person in front of the display device 200, where the target person is used to indicate the operator who browses the panoramic picture in front of the display device 200; obtain the initial face angle and the current face angle of the target person before and after the first preset duration; obtain the offset distance of the panoramic picture by using the initial face angle and the current face angle; and adjust the display content of the panoramic picture on the display device 200 according to the offset distance.
  • the controller 110 is further configured to detect whether there is a preset person in front of the display device 200, the preset person being used to represent an operator, pre-stored in the display device 200, who browses the panoramic picture; if there is, it is determined that the preset person in front of the display device 200 is the target person.
  • the controller 110 is further configured to: when multiple persons exist in front of the display device 200 and none of them is a preset person, respectively calculate the pixel area corresponding to the face of each person, and select the person corresponding to the face with the largest pixel area as the target person.
  • the controller 110 is further configured to: obtain the initial roll angle, the initial pitch angle, and the initial yaw angle of the face of the target person in front of the display device 200; and, after the first preset time period has elapsed, obtain the current roll angle, current pitch angle, and current yaw angle of the face of the target person in front of the display device 200.
  • the controller 110 is further configured to: calculate the horizontal offset distance of the panoramic picture using the initial roll angle, the current roll angle, the initial yaw angle, and the current yaw angle; and calculate the vertical offset distance of the panoramic picture using the initial pitch angle, the current pitch angle, the initial roll angle and the current roll angle.
  • controller 110 is further configured to calculate the horizontal offset distance of the panoramic picture by using the following formula:
  • Offset X represents the horizontal offset distance of the panoramic picture
  • Roll 1 and Roll 2 represent the initial roll angle and the current roll angle
  • Yaw 1 and Yaw 2 represent the initial yaw angle and the current yaw angle
  • A1 represents the correction coefficient
  • delta X represents the minimum adjustment angle value in the horizontal direction
  • Time represents the minimum adjustment offset time.
  • controller 110 is further configured to calculate the vertical offset distance of the panoramic picture by using the following formula:
  • Offset Y represents the vertical offset distance of the panoramic picture
  • Roll 1 and Roll 2 represent the initial roll angle and the current roll angle
  • Pitch 1 and Pitch 2 represent the initial pitch angle and the current pitch angle
  • A2 represents the correction coefficient
  • delta Y represents the minimum adjustment angle value in the vertical direction
  • Time represents the minimum adjustment offset time.
  • the controller 110 is further configured to: if the target person in front of the display device 200 is not recognized after a preset period of time, recalculate the pixel area corresponding to each face currently existing in front of the display device 200; select the person corresponding to the face with the largest pixel area as the target person; obtain the initial face angle of the target person; and, after a second preset time period, obtain the current face angle of the target person.
  • the display device provided by the embodiment of the present invention can, when the user moves position while using the display device, control the control to adjust its position following the user's movement. For example, if the user moves to the left in front of the display device, the control follows the user's movement and moves a corresponding distance to the left in the display interface, so that the viewing angle between the user and the control remains unchanged, thereby ensuring that, regardless of the user's position relative to the display device, the display content of the control can be clearly seen from a consistent angle of view.
  • a display device provided by an embodiment of the present invention includes a controller, and a display and a camera respectively communicating with the controller.
  • the camera is configured to collect environmental image data.
  • the environmental image data is used to characterize the user's position parameters relative to the display.
  • the camera sends the collected environmental image data to the controller, and the controller can obtain the user's position parameters; the position parameters include the vertical distance between the user and the display and the position on the display interface where the center point of the user's face frame falls vertically on the display; the face frame is the frame that frames the user's face when the camera captures the image of the user in front of the display device
  • the center point of the face frame can be the center position of the face frame or the center position between the two pupils of the user.
  • the display is configured to present a display interface, and a target control is displayed in the display interface.
  • the target control can be a notification, a pop-up frame, or a floating window.
  • the control key that realizes the function of dynamically adjusting the position of the control can be configured in the controller.
  • if the control on the display page needs to be adjusted by the display device to follow the movement of the user, the control key can be turned on in advance so that the display device can dynamically adjust the position of the control; if the control key is not turned on, the control is displayed normally, and when the user moves, the control does not adjust its position according to the user's movement.
  • FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment.
  • the controller when realizing the dynamic adjustment of the control, the controller realizes the control based on the face recognition and distance detection algorithm. Specifically, the controller is configured to perform the following steps:
  • after the user turns on the control key, the display device has the function of dynamically adjusting the position of the control, and the controller obtains the environmental image data collected by the camera in real time. If the position of the user using the display device changes, the position before the change is taken as the user's initial position, and the position after the change is taken as the user's end position.
  • the environmental image data corresponding to the user's initial position and the environmental image data corresponding to the end position can be obtained.
  • the environmental image data corresponding to different positions can represent different relative distances between the user and the display and different positions on the display interface when the center point of the user's face frame falls vertically on the display.
  • the controller recognizes the number of human faces on the environmental image data, and when only one human face is recognized, continues to execute the subsequent method of dynamically adjusting the control.
  • the controller is further configured to: receive the environmental image data collected by the camera; recognize the number of faces in the environmental image data; and, when the number of faces in the environmental image data is 1, execute the step of acquiring the user initial position parameter and the user end position parameter.
  • in other cases, the controller can control the target control to be displayed normally without executing the method of dynamically adjusting the control.
  • when multiple users are in front of the display device, the controller can also choose one of the users as the target follower, and the target follower is used as the reference according to which the control adjusts its position.
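  • A minimal sketch of the face-count gate described above, with hypothetical count_faces and face_area helpers: the dynamic adjustment only proceeds when exactly one face is recognized, and when several users are present one of them may be chosen as the target follower (the largest face is used here as an assumed criterion).

```python
def should_adjust_control(environment_image, count_faces):
    """Return True only when exactly one face is found in the environmental
    image data; otherwise the target control is displayed normally."""
    return count_faces(environment_image) == 1


def choose_target_follower(faces, face_area):
    """When several users are present, pick one of them as the target follower
    that the control will follow (largest face area is an assumed criterion)."""
    return max(faces, key=face_area) if faces else None
```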
  • FIG. 14 exemplarily shows a flowchart of a method for obtaining the initial position parameter of a target control according to an embodiment
  • FIG. 15 exemplarily shows a schematic diagram of a reference coordinate system according to an embodiment.
  • the controller is executed to obtain the initial position parameters of the target control, and is further configured to perform the following steps:
  • a reference coordinate system can be established in the display interface.
  • the coordinate origin O of the reference coordinate system is set at the upper left corner of the display interface
  • the positive direction of the X axis is the direction from the left to the right of the display interface
  • the positive direction of the Y axis is the direction from the top to the bottom of the display interface.
  • S122 Obtain the number of pixels at the origin of the coordinates and the number of horizontal pixels and the number of vertical pixels at the control center point of the target control.
  • the initial position parameter of the target control can be represented by coordinate values, and the horizontal and vertical coordinate values can be calculated according to the pixel points of the control center point of the target control.
  • the controller can obtain the resolution of the current display device, and then can determine the number of pixels at the coordinate origin and the number of pixels at the control center point of the target control. Since the coordinate origin is located at the upper left corner of the display interface, the number of pixels at the coordinate origin can equivalently be regarded as 0.
  • the coordinate position of the control center point M of the target control is used to represent the position of the target control.
  • the number of horizontal pixels and the number of vertical pixels of the control center point M of the target control are respectively obtained.
  • the number of horizontal pixels refers to the number of pixels contained in the X-axis direction between the control center point M of the target control and the coordinate origin O.
  • the number of vertical pixels refers to the number of pixels contained in the Y-axis direction between the control center point M of the target control and the coordinate origin O.
  • S123 Calculate the horizontal pixel number difference between the number of pixels at the coordinate origin and the number of horizontal pixels at the control center point, and the vertical pixel number difference between the number of pixels at the coordinate origin and the number of vertical pixels at the control center point.
  • the number of pixels of the coordinate origin O is 0, and the coordinates of the corresponding coordinate origin are (0, 0).
  • the number of horizontal pixels of the control center point is P 1
  • the number of vertical pixels of the control center point is P 2
  • the pixel coordinates of the corresponding control center point are (P 1 , P 2 ).
  • for a display with a given resolution, the number of pixels included in its display interface is fixed, that is, one resolution corresponds to one set of pixel numbers. Therefore, the distance between two adjacent pixels, that is, the length value of each pixel, can be obtained. If the pixel is square, the length and width of the pixel are the same. By multiplying the pixel number difference by the length value of each pixel, the corresponding distance can be obtained.
  • the coordinates of the control center point of the target control are determined by the horizontal initial distance and the vertical initial distance, that is, the coordinates of the control center point are (L1, L2); the pixel coordinates of the control center point of the target control are determined by the number of horizontal pixels and the number of vertical pixels, that is, the pixel coordinates of the control center point are (P1, P2). The coordinates of the control center point and the pixel coordinates of the control center point are used as the initial position parameters of the target control.
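  • The conversion of steps S121 to S124 from pixel counts to distances can be sketched as follows; the resolution, the physical screen size and the control pixel coordinates used in the usage line are illustrative assumptions.

```python
def control_initial_position(p1, p2, horizontal_pixels, vertical_pixels,
                             screen_width_cm, screen_height_cm):
    """Convert the pixel coordinates (P1, P2) of the control center point,
    counted from the origin O at the upper-left corner of the display interface,
    into distances (L1, L2) by multiplying the pixel number difference with the
    length value of each pixel."""
    pixel_len_x = screen_width_cm / horizontal_pixels   # length of one pixel along X
    pixel_len_y = screen_height_cm / vertical_pixels    # length of one pixel along Y
    l1 = (p1 - 0) * pixel_len_x   # horizontal initial distance from the origin
    l2 = (p2 - 0) * pixel_len_y   # vertical initial distance from the origin
    return (l1, l2), (p1, p2)     # initial position parameters of the target control

# Illustrative usage for a 1920 x 1080 display whose long side is about 135.5 cm.
(l1, l2), (p1, p2) = control_initial_position(960, 200, 1920, 1080,
                                              135.5, 135.5 * 1080 / 1920)
```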
  • the length of its long side is fixed at about 135.5cm.
  • the controller can directly retrieve the user's initial position parameter and the user's end position parameter from the acquired environmental image data.
  • FIG. 16 exemplarily shows a schematic diagram of the environmental image data corresponding to the initial position in the embodiment
  • FIG. 17 exemplarily shows a schematic diagram of the environmental image data corresponding to the end position in the embodiment.
  • the controller when the user is in the initial position in front of the display device, the controller can directly obtain the vertical distance (relative distance) between the user and the display from the corresponding environmental image data, for example, 1.70 m as shown in FIG. 16.
  • when the user moves from the initial position to the end position, the controller can directly obtain the vertical distance (relative distance) between the user and the display from the environmental image data corresponding to the end position, for example, 2.24 m as shown in FIG. 17.
  • FIG. 18 exemplarily shows a schematic diagram of the position change of the center point of the face frame on the display interface during the movement of the user according to the embodiment.
  • when the user is at the initial position A, the position on the display interface where the center point of the user's face frame falls vertically on the display is point X, and the line AX is perpendicular to the display interface
  • when the user is at the end position B, the corresponding position on the display interface is point N, and the line BN is perpendicular to the display interface
  • the line AX is the vertical distance (relative distance) between the user and the display
  • point X is the position on the display interface where the center point of the user's face frame falls vertically on the display when the user is at the initial position
  • the user's initial position parameter can therefore be determined by the line AX and point X.
  • the line BN is the vertical distance (relative distance) between the user and the display
  • point N is the position on the display interface when the center point of the user's face frame falls vertically on the display. Therefore, connecting BN and point N can determine the user end position parameter.
  • in order to ensure that the user's viewing angle when viewing the display device remains the same, that is, the viewing angle at which the user views the display content of the target control remains the same so that the user can clearly view the display content of the target control no matter where the user is, the display device provided in this embodiment needs to control the target control to adjust its position following the user's movement when the user moves position.
  • the offset of the target control can be calculated according to the user's initial position parameter and the user's end position parameter, and the position parameter that the target control needs to move is determined by the offset.
  • the user’s initial position parameters include the user’s initial relative distance to the display and initial position parameters.
  • the user’s end position parameters include the user’s end relative distance to the display and the end position parameters.
  • the user's corresponding position parameters refer to the parameters of the center point of the face frame.
  • the initial position parameter refers to the position on the display interface when the center point of the user's face frame falls vertically on the display when the user is in the initial position.
  • the end position parameter refers to the position on the display interface where the center point of the user's face frame falls vertically on the display when the user moves to the end position.
  • Fig. 19 exemplarily shows a flow chart of the method for calculating the offset of the target control according to the embodiment.
  • the controller is further configured to perform the following steps in the process of calculating the offset of the target control based on the user's initial position parameter and the user's end position parameter:
  • the first distance refers to the planar distance between the position X on the display interface and the control center point M of the target control when the center point of the user's face frame falls vertically on the display when the user is at the initial position A;
  • the second distance refers to the planar distance between the position N on the display interface and the control center point M of the target control when the center point of the user's face frame falls vertically on the display when the user moves to the end position B.
  • the first distance (XM line) represents the horizontal plane distance along the display interface
  • the second distance (NM line) represents the horizontal plane distance along the display interface.
  • the first distance and the second distance can be calculated by the pixel point difference between the center point of the face frame and the control center point of the target control.
  • the controller executes the calculation of the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control, it is further configured to:
  • Step 311 Obtain the number of pixels at the center point of the face frame when the user is at the initial position and the number of pixels at the control center point of the target control.
  • Step 312 Calculate the difference between the number of pixels at the center of the face frame and the number of pixels at the center of the control.
  • Step 313 Calculate the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control according to the difference in the number of pixels and the length value of each pixel.
  • the number of pixels at the center point of the face frame when the user is at the initial position can be obtained by the controller from the environmental image data corresponding to the initial position, and the number of pixels at the control center point of the target control can be obtained according to the system properties. Both pixel points can read the corresponding pixel point coordinates in the reference coordinate system.
  • the number of pixels at the center point of the face frame does not change in the Y-axis direction; therefore, the pixel number difference between the center point of the face frame corresponding to the initial position and the control center point is calculated based on the number of horizontal pixels at the center point of the face frame and the number of horizontal pixels at the control center point.
  • the specific calculation method of the pixel point difference and the first distance please refer to the content of steps S121 to S124 provided in the foregoing embodiment, which will not be repeated here.
  • the controller performs the calculation of the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control, which is further configured to:
  • Step 321 Obtain the number of pixels at the center point of the face frame when the user is at the end position and the number of pixels at the center point of the control of the target control.
  • Step 322 Calculate the difference between the number of pixels at the center of the face frame and the number of pixels at the center of the control.
  • Step 323 Calculate the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control according to the difference in the number of pixels and the length value of each pixel.
  • the number of pixels at the center point of the face frame likewise does not change in the Y-axis direction; therefore, the pixel number difference between the center point of the face frame corresponding to the end position and the control center point is calculated based on the number of horizontal pixels at the center point of the face frame and the number of horizontal pixels at the control center point.
  • the specific calculation method of the pixel point difference and the second distance can refer to the content of steps S121 to S124 provided in the foregoing embodiment, which will not be repeated here.
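  • A sketch of steps 311 to 313 and 321 to 323: with the face-frame center assumed not to move along the Y axis, both the first distance and the second distance reduce to a horizontal pixel number difference multiplied by the length value of each pixel. The pixel values below are illustrative only.

```python
def plane_distance(face_center_px, control_center_px, pixel_length):
    """Plane distance along the display interface between the projection of the
    face-frame center point and the control center point, obtained from the
    difference in horizontal pixel counts and the length value of each pixel."""
    pixel_diff = abs(face_center_px - control_center_px)
    return pixel_diff * pixel_length

PIXEL_LENGTH = 135.5 / 1920  # cm per pixel for the illustrative display above

s1 = plane_distance(face_center_px=620, control_center_px=960,
                    pixel_length=PIXEL_LENGTH)  # first distance (initial position A)
s2 = plane_distance(face_center_px=280, control_center_px=960,
                    pixel_length=PIXEL_LENGTH)  # second distance (end position B)
```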
  • in order that the user's viewing angle of the target control after moving remains the same as the viewing angle of the target control when the user is at the initial position, the position of the target control needs to be adjusted; to do so, it is necessary to determine the theoretical second distance required for the user to view the target control at the same angle of view after moving to the end position.
  • the controller calculates the theoretical second distance when the user moves to the end position based on the initial relative distance, the end relative distance, and the first distance as follows:
  • S 2 ′ is the theoretical second distance
  • S 1 is the first distance
  • AX is the initial relative distance
  • BN is the ending relative distance.
  • FIG. 20 exemplarily shows a schematic diagram when the theoretical second distance is determined according to the embodiment.
  • in other words, the angle formed between the display interface and the line connecting the center point of the user's face frame with the control center point must remain the same before and after the user moves.
  • the theoretical second distance is the theoretical distance between the end position corresponding to the user and the end position M' of the target control. Therefore, the offset Offset of the target control is obtained from the distance difference between the theoretical second distance and the second distance.
  • the offset corresponds to the distance over which the control center point M of the target control is moved to the point M'.
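  • The relation used below is an assumption: it keeps the viewing angle equal on both sides (S1 / AX = S2' / BN, i.e. similar triangles), which is consistent with the stated goal of an unchanged viewing angle, whereas the exact formula of the original disclosure is given only as an image.

```python
def theoretical_second_distance(s1, ax, bn):
    """Assumed similar-triangles relation: the viewing angle at the initial
    position (S1 / AX) equals the viewing angle at the end position (S2' / BN)."""
    return s1 * bn / ax


def control_offset(s1, s2, ax, bn):
    """Offset of the target control: the distance difference between the
    theoretical second distance S2' and the measured second distance S2."""
    return theoretical_second_distance(s1, ax, bn) - s2

# Illustrative values: AX = 1.70 m and BN = 2.24 m as in FIG. 16 / FIG. 17.
offset = control_offset(s1=24.0, s2=25.0, ax=1.70, bn=2.24)
```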
  • the end position parameter of the target control is obtained, and the target control is moved to the position corresponding to the end position parameter.
  • based on the initial position parameter of the target control and the offset of the target control, it is possible to determine the end position to which the target control needs to be adjusted, and the position adjustment of the target control is realized according to the end position parameter.
  • FIG. 21 exemplarily shows the first schematic diagram when the position of the control is dynamically adjusted according to the embodiment.
  • point M is the initial position parameter of the target control
  • point M' is the end position parameter of the target control; the target control is moved from point M to point M' to realize the position adjustment of the target control when the user moves from the initial position A to the end position B.
  • the end position parameter of the target control = the initial position parameter of the target control - the offset.
  • the foregoing embodiment is based on a situation in which the position adjustment of the target control is realized when the user moves from the initial position A to the end position B.
  • the user may also move in the vertical direction during the movement, that is, the user changes from a standing state to a sitting state; at this time, the user's position also changes in the Y-axis direction.
  • in order to adapt to the situation where the user's position changes both in the X-axis direction and in the Y-axis direction, the display device provided by the embodiment of the present invention needs to determine both the horizontal offset and the vertical offset when determining the offset of the target control. For example, when the user changes from standing directly in front of the display device to sitting on a chair at the rear left, the target control needs to be controlled to move from the initial position toward the lower left corner.
  • the user initial position parameter includes a horizontal initial position parameter and a vertical initial position parameter
  • the user end position parameter includes a horizontal end position parameter and a vertical end position parameter.
  • the horizontal initial position parameter includes the horizontal initial relative distance and the horizontal initial position parameter of the user relative to the display at the initial position
  • the vertical initial position parameter includes the vertical initial relative distance and the vertical initial position parameter of the user relative to the display at the initial position.
  • the horizontal end position parameter includes the horizontal end relative distance and the horizontal end position parameter of the user relative to the display at the end position
  • the vertical end position parameter includes the vertical end relative distance and the vertical end position parameter of the user relative to the display at the end position.
  • the vertical relative distance refers to the distance by which the center point of the face frame moves along the Y axis, that is, the height difference between the center point of the face frame when the user is standing and the center point of the face frame when the user sits down.
  • the vertical position parameter refers to the position on the display interface when the center point of the user's face frame falls vertically on the display when the user moves vertically to the end position.
  • the controller calculates the offset of the target control based on the user's initial position parameter and the user's end position parameter, and is further configured to:
  • Step 701 Calculate the lateral offset of the target control based on the lateral initial position parameter and the lateral end position parameter.
  • Step 702 Calculate the longitudinal offset of the target control based on the longitudinal initial position parameter and the longitudinal end position parameter.
  • when calculating the horizontal offset and the vertical offset of the target control, reference may be made to the content described in step S3 provided in the foregoing embodiment: the horizontal offset of the target control is calculated from the horizontal initial position parameter and the horizontal end position parameter, and the vertical offset of the target control is calculated from the vertical initial position parameter and the vertical end position parameter. The specific calculation process is not repeated here.
  • the end position parameter of the target control after the adjustment can be determined according to the initial position parameter of the target control and the offset of the target control.
  • the initial position parameter of the target control includes a horizontal initial position parameter and a vertical initial position parameter. Therefore, the determined end position parameter of the target control also includes the horizontal end position parameter and the vertical end position parameter of the target control.
  • the controller obtains the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control, and is further configured to:
  • Step 801 Calculate the horizontal end position parameter of the target control according to the horizontal initial position parameter and the horizontal offset of the target control.
  • Step 802 Calculate the longitudinal end position parameter of the target control according to the longitudinal initial position parameter and the longitudinal offset of the target control.
  • according to the horizontal initial position parameter and the horizontal offset of the target control, the horizontal end position of the target control after position adjustment can be determined; and according to the vertical initial position parameter and the vertical offset of the target control, the vertical end position of the target control after position adjustment can be determined.
  • according to the horizontal end position parameter and the vertical end position parameter, the position adjustment of the target control is realized, so that the target control can adjust its position following the movement of the user.
  • Fig. 22 exemplarily shows a second schematic diagram when the position of the control is dynamically adjusted according to the embodiment.
  • in this case, the target control needs to be controlled to move from the initial position toward the lower left corner.
  • the horizontal end position parameter of the target control = the horizontal initial position parameter - the horizontal offset
  • the vertical end position parameter of the target control = the vertical initial position parameter + the vertical offset
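  • A short sketch of the two-dimensional update just described; the sign convention (subtract the horizontal offset, add the vertical offset) follows the two relations above, and the variable names are assumptions.

```python
def control_end_position(initial_x, initial_y, offset_x, offset_y):
    """End position parameters of the target control: the horizontal end
    position equals the horizontal initial position minus the horizontal
    offset, and the vertical end position equals the vertical initial position
    plus the vertical offset (e.g. moving toward the lower left corner)."""
    return initial_x - offset_x, initial_y + offset_y

end_x, end_y = control_end_position(initial_x=67.8, initial_y=14.1,
                                    offset_x=6.6, offset_y=9.4)
```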
  • when the user moves from the initial position to the end position, the controller receives the environmental image data corresponding to the initial position and the environmental image data corresponding to the end position collected by the camera, so as to obtain the user initial position parameter and the user end position parameter; calculates the offset of the target control according to the user initial position parameter and the user end position parameter; obtains the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control; and moves the target control to the position corresponding to the end position parameter.
  • the display device provided by the embodiment of the present invention can adjust the position of the target control following the movement of the user, so that the user can watch the target control from any direction within the visual range of the camera of the display device, thereby ensuring that the user can always clearly see the display content of the control and improving the user's subjective visual experience.
  • FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment.
  • This application also provides a method for dynamically adjusting controls, which is executed by a controller in a display device, and the method includes the following steps:
  • the calculating the offset of the target control based on the user initial position parameter and the user end position parameter includes:
  • the user's initial position parameter includes the user's initial relative distance to the display and initial position parameters
  • the user end position parameter includes the user's end relative distance to the display and end position parameters
  • the user's corresponding position parameter refers to a person Parameters of the center point of the face frame
  • the position parameter corresponding to the control refers to the parameter of the center point of the control
  • calculating, based on the initial relative distance, the end relative distance and the first distance, the theoretical second distance when the user moves to the end position, where the theoretical second distance is used to characterize the theoretical distance between the end position corresponding to the user and the end position of the target control; and calculating the distance difference between the theoretical second distance and the second distance to obtain the offset of the target control.
  • the calculating of the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control includes: obtaining the number of pixels at the center point of the face frame when the user is at the initial position and the number of pixels at the control center point of the target control; calculating the pixel number difference between the two; and calculating, according to the pixel number difference and the length value of each pixel, the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control.
  • the calculating of the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control includes: obtaining the number of pixels at the center point of the face frame when the user is at the end position and the number of pixels at the control center point of the target control; calculating the pixel number difference between the two; and calculating, according to the pixel number difference and the length value of each pixel, the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control.
  • the calculation of the theoretical second distance when the user moves to the end position based on the initial relative distance, the end relative distance, and the first distance includes:
  • S 2 ′ is the theoretical second distance
  • S 1 is the first distance
  • AX is the initial relative distance
  • BN is the ending relative distance.
  • the obtaining of the initial position parameter of the target control includes: calculating the horizontal initial distance and the vertical initial distance between the control center point of the target control and the coordinate origin, and using the horizontal initial distance, the vertical initial distance, the number of horizontal pixels and the number of vertical pixels of the control center point as the initial position parameters of the target control.
  • the calculating the offset of the target control based on the user initial position parameter and the user end position parameter includes:
  • the user initial position parameter includes a horizontal initial position parameter and a vertical initial position parameter
  • the user end position parameter includes a horizontal end position parameter and a vertical end position parameter
  • the lateral offset of the target control is calculated based on the lateral initial position parameter and the lateral end position parameter, and the longitudinal offset of the target control is calculated based on the longitudinal initial position parameter and the longitudinal end position parameter.
  • the obtaining the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control includes:
  • the initial position parameter of the target control includes a horizontal initial position parameter and a vertical initial position parameter
  • the horizontal end position parameter of the target control is calculated according to the horizontal initial position parameter and the horizontal offset of the target control, and the longitudinal end position parameter of the target control is calculated according to the longitudinal initial position parameter and the longitudinal offset of the target control.
  • the method further includes: receiving the environmental image data collected by the camera; recognizing the number of faces in the environmental image data; and, when the number of faces in the environmental image data is 1, performing the step of acquiring the user's initial position parameter and the user's end position parameter.
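  • Putting the pieces together, one adjustment cycle of the method might be organized as below; every helper used here (camera, recognizer, control and their methods) is a hypothetical placeholder standing in for the camera, face-recognition and rendering facilities of the display device.

```python
def adjust_control_once(camera, recognizer, control, pixel_length):
    """One cycle: read the environmental image data for the initial and end
    positions, compute the offset of the target control, and move the control."""
    start = camera.capture()                  # environmental image data (initial position)
    end = camera.capture()                    # environmental image data (end position)
    if recognizer.count_faces(start) != 1 or recognizer.count_faces(end) != 1:
        return                                # display the target control normally

    ax = recognizer.relative_distance(start)  # AX: initial relative distance
    bn = recognizer.relative_distance(end)    # BN: end relative distance
    x_px = recognizer.face_center_px(start)   # projection X of the face-frame center
    n_px = recognizer.face_center_px(end)     # projection N of the face-frame center
    m_px = control.center_px()                # control center point M

    s1 = abs(x_px - m_px) * pixel_length      # first distance
    s2 = abs(n_px - m_px) * pixel_length      # second distance
    s2_theory = s1 * bn / ax                  # assumed similar-triangles relation
    offset = s2_theory - s2                   # offset of the target control

    control.move_to(control.initial_position() - offset)  # end = initial - offset
```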
  • the present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in each embodiment of the method for dynamically adjusting controls provided by the present invention when the program is executed.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (English: read-only memory, abbreviated as ROM) or a random access memory (English: random access memory, abbreviated as: RAM), etc.

Abstract

The present application provides a display method and a display device. When the display device displays a panoramic picture, a target person in front of the display device can be identified, and next, an initial face angle of the target person and a current face angle after a preset duration are separately obtained; an offset distance that the panoramic picture on the display device needs to move is calculated using the initial face angle and the current face angle; and finally, the panoramic picture is adjusted according to the offset distance to enable the content of the panoramic picture that has not been displayed before to be displayed on the display device.

Description

Display method and display device
This application claims the priority of the Chinese patent application No. 202010342885.9, filed with the Chinese Patent Office on April 27, 2020 and entitled "A method for dynamically adjusting controls and a display device", the entire content of which is incorporated herein by reference; this application also claims the priority of the Chinese patent application No. 202010559804.0, filed with the Chinese Patent Office on June 18, 2020 and entitled "Panoramic picture browsing method and display device", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the technical field of smart televisions, and in particular to a display method and a display device.
Background
A panoramic picture is a picture presented by means of a wide-angle representation, and it can show more of the surrounding environment. Generally, a panoramic picture cannot be displayed completely on a display device. When content other than the currently displayed content of the panoramic picture needs to be browsed, the panoramic picture needs to be adjusted so that other content of the panoramic picture is moved into the display area of the display device.
At present, when browsing a panoramic picture on a display device, a remote control is usually needed to adjust the display content of the panoramic picture. For example, pressing the arrow keys on the remote control moves the panoramic picture in the corresponding direction on the screen of the display device, thereby displaying content in the opposite direction that was not displayed before. Alternatively, the operator's gestures are collected, and the displayed content of the panoramic picture is adjusted according to the direction of the gesture; for example, if the operator's finger slides to the right on the screen of the display device, the panoramic picture moves to the right on the screen, thereby displaying content on the left side of the panoramic picture that was not displayed before.
Summary of the invention
This application provides a display method and a display device.
In a first aspect, this application provides a panoramic picture browsing method, including:
In the case that a display device displays a panoramic picture, identifying a target person in front of the display device, where the target person is used to indicate an operator who browses the panoramic picture in front of the display device;
Acquiring an initial face angle and a current face angle of the target person before and after a first preset duration;
Obtaining an offset distance of the panoramic picture by using the initial face angle and the current face angle;
Adjusting the display content of the panoramic picture on the display device according to the offset distance.
In a second aspect, this application further provides a display device, including:
a display;
a detector, configured to collect an image in front of the display device;
a controller, configured to:
In the case that the display device displays a panoramic picture, identify a target person in front of the display device, where the target person is used to indicate an operator who browses the panoramic picture in front of the display device;
Acquire the initial face angle and the current face angle of the target person before and after the first preset duration;
Obtain the offset distance of the panoramic picture by using the initial face angle and the current face angle;
Adjust the display content of the panoramic picture on the display device according to the offset distance.
Description of the drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 exemplarily shows a schematic diagram of an operation scene between a display device and a control device according to some embodiments;
FIG. 2 exemplarily shows a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 3 exemplarily shows a hardware configuration block diagram of a control device 100 according to some embodiments;
FIG. 4 exemplarily shows a schematic diagram of software configuration in a display device 200 according to some embodiments;
FIG. 5 exemplarily shows a schematic diagram of the icon control interface display of an application program in a display device 200 according to some embodiments;
FIG. 6 is a schematic diagram of interaction between a display device 200 and an operator according to an embodiment of the application;
FIG. 7 is a flowchart of a panoramic picture browsing method according to an embodiment of the application;
FIG. 8 is a schematic diagram of a detector 230 collecting images according to an embodiment of the application;
FIG. 9 is a schematic diagram of aircraft angles according to an embodiment of the application;
FIG. 10 is a schematic diagram of interaction between another display device 200 and an operator according to an embodiment of the application;
FIG. 11 is a schematic diagram of the initial face angle of the target person obtained by the controller 110 according to an embodiment of the application;
FIG. 12 is a schematic diagram of the current face angle of the target person obtained by the controller 110 according to an embodiment of the application;
FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment;
FIG. 14 exemplarily shows a flowchart of a method for obtaining the initial position parameter of a target control according to an embodiment;
FIG. 15 exemplarily shows a schematic diagram of a reference coordinate system according to an embodiment;
FIG. 16 exemplarily shows a schematic diagram of the environmental image data corresponding to the initial position according to an embodiment;
FIG. 17 exemplarily shows a schematic diagram of the environmental image data corresponding to the end position according to an embodiment;
FIG. 18 exemplarily shows a schematic diagram of the position change of the center point of the face frame on the display interface during the movement of the user according to an embodiment;
FIG. 19 exemplarily shows a flowchart of a method for calculating the offset of a target control according to an embodiment;
FIG. 20 exemplarily shows a schematic diagram when the theoretical second distance is determined according to an embodiment;
FIG. 21 exemplarily shows a first schematic diagram when the position of a control is dynamically adjusted according to an embodiment;
FIG. 22 exemplarily shows a second schematic diagram when the position of a control is dynamically adjusted according to an embodiment.
Detailed description of embodiments
In order to make the purpose, implementations and advantages of the present application clearer, the exemplary implementations of the present application will be described clearly and completely below with reference to the accompanying drawings of the exemplary embodiments of the present application. Obviously, the described exemplary embodiments are only a part of the embodiments of the present application, rather than all of the embodiments.
Based on the exemplary embodiments described in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the appended claims of the present application. In addition, although the disclosure in this application is introduced in terms of one or several exemplary examples, it should be understood that each aspect of the disclosure can also separately constitute a complete implementation.
It should be noted that the brief description of terms in this application is only for the convenience of understanding the embodiments described below, and is not intended to limit the embodiments of this application. Unless otherwise stated, these terms should be understood according to their ordinary and usual meanings.
The terms "first", "second", "third", etc. in the specification, claims and the above-mentioned drawings of this application are used to distinguish similar or same-type objects or entities, and do not necessarily imply a specific order or sequence, unless otherwise indicated. It should be understood that terms used in this way are interchangeable under appropriate circumstances, for example, the embodiments can be implemented in an order other than those given in the illustrations or descriptions herein.
In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a product or device including a series of components is not necessarily limited to the components clearly listed, but may include other components that are not clearly listed or that are inherent to the product or device.
The term "module" used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that can perform the function related to the element.
The term "remote control" used in this application refers to a component of an electronic device (such as the display device disclosed in this application), which can generally control the electronic device wirelessly within a relatively short distance. The component is generally connected to the electronic device using infrared and/or radio frequency (RF) signals and/or Bluetooth, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a handheld touch remote control replaces most of the physical built-in hard keys of a general remote control device with a user interface on a touch screen.
The term "gesture" used in this application refers to a user behavior in which the user expresses an expected idea, action, purpose and/or result through a change of hand shape or a hand movement.
图1中示例性示出了根据实施例中显示设备与控制装置之间操作场景的示意图。如图1中示出,用户可通过移动终端300和控制装置100操作显示设备200。Fig. 1 exemplarily shows a schematic diagram of an operation scenario between a display device and a control device according to an embodiment. As shown in FIG. 1, the user can operate the display device 200 through the mobile terminal 300 and the control device 100.
在一些实施例中,控制装置100可以是遥控器,遥控器和显示设备的通信包括红外协议通信或蓝牙协议通信,及其他短距离通信方式等,通过无线或其他有线方式来控制显示设备200。用户可以通过遥控器上按键,语音输入、控制面板输入等输入用户指令,来控制显示设备200。如:用户可以通过遥控器上音量加减键、频道控制键、上/下/左/右的移动按键、语音输入按键、菜单键、开关机按键等输入相应控制指令,来实现控制显示设备200的功能。In some embodiments, the control device 100 may be a remote controller, and the communication between the remote controller and the display device includes infrared protocol communication or Bluetooth protocol communication, and other short-range communication methods, and the display device 200 is controlled by wireless or other wired methods. The user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can control the display device 200 by inputting corresponding control commands through the volume plus and minus keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power on/off keys on the remote control. Function.
在一些实施例中,也可以使用移动终端、平板电脑、计算机、笔记本电脑、和其他智能设备以控制显示设备200。例如,使用在智能设备上运行的应用程序控制显示设备200。该应用程序通过配置可以在与智能设备关联的屏幕上,在直观的用户界面(UI)中为用户提供各种控制。In some embodiments, mobile terminals, tablet computers, computers, notebook computers, and other smart devices can also be used to control the display device 200. For example, an application program running on a smart device is used to control the display device 200. The application can be configured to provide users with various controls in an intuitive user interface (UI) on the screen associated with the smart device.
In some embodiments, software applications may be installed on both the mobile terminal 300 and the display device 200, so that connection and communication between them can be realized through a network communication protocol, achieving one-to-one control operation and data communication. For example, a control instruction protocol can be established between the mobile terminal 300 and the display device 200, the remote-control keyboard can be synchronized onto the mobile terminal 300, and the function of controlling the display device 200 can be realized by operating the user interface on the mobile terminal 300. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 to realize a synchronous display function.
如图1中还示出,显示设备200还与服务器400通过多种通信方式进行数据通信。可允许显示设备200通过局域网(LAN)、无线局域网(WLAN)和其他网络进行通信连接。服务器400可以向显示设备200提供各种内容和互动。示例的,显示设备200通过发送和接收信息,以及电子节目指南(EPG)互动,接收软件程序更新,或访问远程储存的数字媒体库。服务器400可以是一个集群,也可以是多个集群,可以包括一类或多类服务器。通过服务器400提供视频点播和广告服务等其他网络服务内容。As also shown in FIG. 1, the display device 200 also performs data communication with the server 400 through multiple communication methods. The display device 200 may be allowed to communicate through a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. For example, the display device 200 can receive software program updates or access a remotely stored digital media library by sending and receiving information and interacting with an electronic program guide (EPG). The server 400 may be one cluster or multiple clusters, and may include one or more types of servers. The server 400 provides other network service content such as video-on-demand and advertising services.
显示设备200,可以液晶显示器、OLED显示器、投影显示设备。具体显示设备类型,尺寸大小和分辨率等不作限定,本领技术人员可以理解的是,显示设备200可以根据需要做性能和配置上一些改变。The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The specific display device type, size, resolution, etc. are not limited, and those skilled in the art can understand that the display device 200 can make some changes in performance and configuration as required.
显示设备200除了提供广播接收电视功能之外,还可以附加提供计算机支持功能的智能网络电视功能,包括但不限于,网络电视、智能电视、互联网协议电视(IPTV)等。In addition to providing broadcast receiving TV functions, the display device 200 may also additionally provide computer-supported functions of smart network TV, including but not limited to, network TV, smart TV, Internet Protocol TV (IPTV), and the like.
图2中示例性示出了根据示例性实施例中显示设备200的硬件配置框图。FIG. 2 exemplarily shows a block diagram of the hardware configuration of the display device 200 according to an exemplary embodiment.
In some embodiments, the display device 200 includes at least one of a controller 250, a tuner-demodulator 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.
In some embodiments, the display 275 is a component configured to receive image signals output from the first processor and to display video content, images, and a menu manipulation interface.
在一些实施例中,显示器275,包括用于呈现画面的显示屏组件,以及驱动图像显示的驱动组件。In some embodiments, the display 275 includes a display screen component for presenting images, and a driving component for driving image display.
In some embodiments, the displayed video content may come from broadcast television content, that is, various broadcast signals that can be received through wired or wireless communication protocols. Alternatively, various image content sent by a network server may be received through a network communication protocol and displayed.
在一些实施例中,显示器275用于呈现显示设备200中产生且用于控制显示设备200的用户操控UI界面。In some embodiments, the display 275 is used to present a user manipulation UI interface generated in the display device 200 and used to control the display device 200.
在一些实施例中,根据显示器275类型不同,还包括用于驱动显示的驱动组件。In some embodiments, depending on the type of the display 275, a driving component for driving the display is further included.
在一些实施例中,显示器275为一种投影显示器,还可以包括一种投影装置和投影屏幕。In some embodiments, the display 275 is a projection display, and may also include a projection device and a projection screen.
在一些实施例中,通信器220是用于根据各种通信协议类型与外部设备或外部服务器进行通信的组件。例如:通信器可以包括Wifi芯片,蓝牙通信协议芯片,有线以太网通信协议芯片等其他网络通信协议芯片或近场通信协议芯片,以及红外接收器中的至少一种。In some embodiments, the communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example, the communicator may include at least one of Wifi chip, Bluetooth communication protocol chip, wired Ethernet communication protocol chip or other network communication protocol chip or near field communication protocol chip, and infrared receiver.
在一些实施例中,显示设备200可以通过通信器220与外部控制设备100或内容提供设备之间建立控制信号和数据信号发送和接收。In some embodiments, the display device 200 may establish control signal and data signal transmission and reception with the external control device 100 or the content providing device through the communicator 220.
在一些实施例中,用户接口265,可用于接收控制装置100(如:红外遥控器等)红外控制信号。In some embodiments, the user interface 265 may be used to receive infrared control signals of the control device 100 (such as an infrared remote control, etc.).
In some embodiments, the detector 230 is a component used by the display device 200 to collect signals from the external environment or to interact with the outside.
In some embodiments, the detector 230 includes a light receiver, a sensor for collecting the intensity of ambient light, so that display parameters can be adaptively changed according to the collected ambient light, and the like.
在一些实施例中,检测器230还可以包括图像采集器,如相机、摄像头等,可以用于采集外部环境场景,以及用于采集用户的属性或与用户交互手势,可以自适应变化显示参数,也可以识别用户手势,以实现与用户之间互动的功能。In some embodiments, the detector 230 may also include an image collector, such as a camera, a camera, etc., which can be used to collect external environment scenes, and to collect attributes of the user or interact with the user gestures, and can adaptively change the display parameters. It can also recognize user gestures to realize the function of interacting with the user.
在一些实施例中,检测器230还可以包括温度传感器等,如通过感测环境温度。In some embodiments, the detector 230 may also include a temperature sensor or the like, for example, by sensing the ambient temperature.
在一些实施例中,显示设备200可自适应调整图像的显示色温。如当温度偏高的环境时,可调整显示设备200显示图像色温偏冷色调,或当温度偏低的环境时,可以调整显示设备200显示图像偏暖色调。In some embodiments, the display device 200 may adaptively adjust the display color temperature of the image. For example, when the temperature is relatively high, the display device 200 can be adjusted to display a colder image color temperature, or when the temperature is relatively low, the display device 200 can be adjusted to display a warmer image.
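As a rough illustration of the adaptive behavior described above, the following Python sketch maps an ambient-temperature reading to a display white-point color temperature. The thresholds and Kelvin values are assumptions made purely for illustration; the embodiments do not specify concrete numbers.

```python
# Minimal sketch of temperature-adaptive color temperature, assuming a simple
# threshold split; all numeric values are illustrative, not from the source.
def adjust_color_temperature(ambient_temp_c: float) -> int:
    """Return a display white-point color temperature (in Kelvin):
    cooler tones when the room is warm, warmer tones when it is cold."""
    WARM_ROOM_THRESHOLD_C = 28.0   # assumed threshold
    COLD_ROOM_THRESHOLD_C = 15.0   # assumed threshold
    if ambient_temp_c >= WARM_ROOM_THRESHOLD_C:
        return 9300   # cooler (bluer) white point
    if ambient_temp_c <= COLD_ROOM_THRESHOLD_C:
        return 5000   # warmer (redder) white point
    return 6500       # neutral default
```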
In some embodiments, the detector 230 may also include a sound collector, such as a microphone, which may be used to receive the user's voice, for example a voice signal containing a control instruction for the user to control the display device 200, or to collect environmental sounds for identifying the type of environmental scene, so that the display device 200 can adapt to the environmental noise.
在一些实施例中,如图2所示,输入/输出接口255被配置为,可进行控制器250与外部其他设备或其他控制器250之间的数据传输。如接收外部设备的视频信号数据和音频信 号数据、或命令指令数据等。In some embodiments, as shown in FIG. 2, the input/output interface 255 is configured to perform data transmission between the controller 250 and other external devices or other controllers 250. Such as receiving video signal data and audio signal data from external devices, or command instruction data.
In some embodiments, the external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or digital high-definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like. The above-mentioned multiple interfaces may also form a composite input/output interface.
In some embodiments, as shown in FIG. 2, the tuner-demodulator 210 is configured to receive broadcast television signals by wired or wireless reception, perform modulation and demodulation processing such as amplification, mixing, and resonance, and demodulate audio and video signals from among multiple wireless or cable broadcast television signals. The audio and video signals may include television audio and video signals carried on the frequency of the television channel selected by the user, as well as EPG data signals.
In some embodiments, the frequency demodulated by the tuner-demodulator 210 is controlled by the controller 250. The controller 250 can send a control signal according to the user's selection, so that the modem responds to the television signal frequency selected by the user and modulates and demodulates the television signal carried on that frequency.
在一些实施例中,广播电视信号可根据电视信号广播制式不同区分为地面广播信号、有线广播信号、卫星广播信号或互联网广播信号等。或者根据调制类型不同可以区分为数字调制信号,模拟调制信号等。或者根据信号种类不同区分为数字信号、模拟信号等。In some embodiments, broadcast television signals can be classified into terrestrial broadcast signals, cable broadcast signals, satellite broadcast signals, or Internet broadcast signals according to different television signal broadcast formats. Or it can be divided into digital modulation signal, analog modulation signal, etc. according to different modulation types. Or it can be divided into digital signal, analog signal, etc. according to different signal types.
In some embodiments, the controller 250 and the tuner-demodulator 210 may be located in different separate devices, that is, the tuner-demodulator 210 may also be in an external device of the main device where the controller 250 is located, such as an external set-top box. In this way, the set-top box outputs the television audio and video signals obtained by modulating and demodulating the received broadcast television signals to the main device, and the main device receives the audio and video signals through the first input/output interface.
在一些实施例中,控制器250,通过存储在存储器上中各种软件控制程序,来控制显示设备的工作和响应用户的操作。控制器250可以控制显示设备200的整体操作。例如:响应于接收到用于选择在显示器275上显示UI对象的用户命令,控制器250便可以执行与由用户命令选择的对象有关的操作。In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory. The controller 250 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
在一些实施例中,所述对象可以是可选对象中的任何一个,例如超链接或图标。与所选择的对象有关操作,例如:显示连接到超链接页面、文档、图像等操作,或者执行与所述图标相对应程序的操作。用于选择UI对象用户命令,可以是通过连接到显示设备200的各种输入装置(例如,鼠标、键盘、触摸板等)输入命令或者与由用户说出语音相对应的语音命令。In some embodiments, the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, for example: display operations connected to hyperlink pages, documents, images, etc., or perform operations corresponding to the icon. The user command for selecting the UI object may be a command input through various input devices (for example, a mouse, a keyboard, a touch pad, etc.) connected to the display device 200 or a voice command corresponding to the voice spoken by the user.
As shown in FIG. 2, the controller 250 includes at least one of a random access memory 251 (RAM), a read-only memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (for example, a graphics processing unit (GPU)), a central processing unit 254 (CPU), a communication interface, and a communication bus 256 (Bus), where the communication bus connects the various components.
In some embodiments, the RAM 251 is used to store temporary data of the operating system or other running programs.
在一些实施例中,ROM 252用于存储各种系统启动的指令。In some embodiments, the ROM 252 is used to store various system startup instructions.
在一些实施例中,ROM 252用于存储一个基本输入输出系统,称为基本输入输出系统(Basic Input Output System,BIOS)。用于完成对系统的加电自检、系统中各功能模块的初始化、系统的基本输入/输出的驱动程序及引导操作系统。In some embodiments, the ROM 252 is used to store a basic input output system, which is called a basic input output system (Basic Input Output System, BIOS). It is used to complete the power-on self-check of the system, the initialization of each functional module in the system, the basic input/output driver of the system and the boot operating system.
In some embodiments, when a power-on signal is received, the display device 200 starts to power up, and the CPU runs the system startup instructions in the ROM 252 and copies the temporary data of the operating system stored in the memory to the RAM 251, so as to start or run the operating system. After the operating system has started, the CPU copies the temporary data of various application programs in the memory to the RAM 251, so as to start or run those application programs.
In some embodiments, the CPU processor 254 is configured to execute the operating system and application program instructions stored in the memory, and to execute various application programs, data, and content according to the various interactive instructions received from external input, so as to finally display and play various audio and video content.
在一些示例性实施例中,CPU处理器254,可以包括多个处理器。多个处理器可包括一个主处理器以及一个或多个子处理器。主处理器,用于在预加电模式中执行显示设备200一些操作,和/或在正常模式下显示画面的操作。一个或多个子处理器,用于在待机模式等状态下一种操作。In some exemplary embodiments, the CPU processor 254 may include multiple processors. The multiple processors may include a main processor and one or more sub-processors. The main processor is used to perform some operations of the display device 200 in the pre-power-on mode, and/or to display images in the normal mode. One or more sub-processors, used for an operation in a state such as standby mode.
In some embodiments, the graphics processor 253 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the various objects obtained by the arithmetic unit; the rendered objects are then displayed on the display.
在一些实施例中,视频处理器270被配置为将接收外部视频信号,根据输入信号的标准编解码协议,进行解压缩、解码、缩放、降噪、帧率转换、分辨率转换、图像合成等等视频处理,可得到直接可显示设备200上显示或播放的信号。In some embodiments, the video processor 270 is configured to receive an external video signal, and perform decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, etc. according to the standard codec protocol of the input signal. After video processing, a signal that can be directly displayed or played on the display device 200 can be obtained.
在一些实施例中,视频处理器270,包括解复用模块、视频解码模块、图像合成模块、帧率转换模块、显示格式化模块等。In some embodiments, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.
其中,解复用模块,用于对输入音视频数据流进行解复用处理,如输入MPEG-2,则解复用模块进行解复用成视频信号和音频信号等。Among them, the demultiplexing module is used to demultiplex the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing module will demultiplex into a video signal and an audio signal.
视频解码模块,则用于对解复用后的视频信号进行处理,包括解码和缩放处理等。The video decoding module is used to process the demultiplexed video signal, including decoding and scaling.
图像合成模块,如图像合成器,其用于将图形生成器根据用户输入或自身生成的GUI信号,与缩放处理后视频图像进行叠加混合处理,以生成可供显示的图像信号。An image synthesis module, such as an image synthesizer, is used to superimpose and mix the GUI signal generated by the graphics generator with the zoomed video image according to the user input or itself, so as to generate an image signal for display.
The frame rate conversion module is used to convert the frame rate of the input video, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, usually by means of frame interpolation.
The display formatting module is used to take the video output signal after frame rate conversion and change it into a signal that conforms to the display format, for example outputting RGB data signals.
In some embodiments, the graphics processor 253 and the video processor may be integrated or configured separately. When integrated, they can jointly process the graphics signals output to the display; when configured separately, they can perform different functions, for example in a GPU + FRC (Frame Rate Conversion) architecture.
In some embodiments, the audio processor 280 is used to receive an external audio signal and, according to the standard codec protocol of the input signal, perform decompression and decoding, as well as noise reduction, digital-to-analog conversion, amplification, and other processing, to obtain a sound signal that can be played in a speaker.
在一些实施例中,视频处理器270可以包括一颗或多颗芯片组成。音频处理器,也可以包括一颗或多颗芯片组成。In some embodiments, the video processor 270 may include one or more chips. The audio processor may also include one or more chips.
在一些实施例中,视频处理器270和音频处理器280,可以单独的芯片,也可以于控制器一起集成在一颗或多颗芯片中。In some embodiments, the video processor 270 and the audio processor 280 may be separate chips, or may be integrated in one or more chips together with the controller.
In some embodiments, the audio output receives, under the control of the controller 250, the sound signal output by the audio processor 280, for example through the speaker 286. In addition to the speaker carried by the display device 200 itself, it may also output to the external audio output terminal of a sound-producing device of an external device, such as an external audio interface or an earphone interface, and may also include a short-range communication module in the communication interface, for example a Bluetooth module for sound output through a Bluetooth speaker.
The power supply 290, under the control of the controller 250, provides power supply support for the display device 200 using power input from an external power source. The power supply 290 may include a built-in power supply circuit installed inside the display device 200, or may be a power supply installed outside the display device 200, with a power interface in the display device 200 providing the external power supply.
用户接口265,用于接收用户的输入信号,然后,将接收用户输入信号发送给控制器250。用户输入信号可以是通过红外接收器接收的遥控器信号,可以通过网络通信模块接收各种用户控制信号。The user interface 265 is used to receive user input signals, and then send the received user input signals to the controller 250. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
In some embodiments, the user inputs a user command through the control device 100 or the mobile terminal 300, the user input interface receives the command according to the user's input, and the display device 200 responds to the user's input through the controller 250.
在一些实施例中,用户可在显示器275上显示的图形用户界面(GUI)输入用户命令,则用户输入接口通过图形用户界面(GUI)接收用户输入命令。或者,用户可通过输入特定的声音或手势进行输入用户命令,则用户输入接口通过传感器识别出声音或手势,来接收用户输入命令。In some embodiments, the user may input a user command on a graphical user interface (GUI) displayed on the display 275, and the user input interface receives the user input command through the graphical user interface (GUI). Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
在一些实施例中,“用户界面”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面常用的表现形式是图形用户界面(Graphic User Interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的一个图标、窗口、控件等界面元素,其中控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。In some embodiments, the "user interface" is a medium interface for interaction and information exchange between an application or operating system and a user, and it realizes the conversion between the internal form of information and the form acceptable to the user. The commonly used form of the user interface is the Graphic User Interface (GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It can be an icon, window, control and other interface elements displayed on the display screen of an electronic device. The control can include icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, Widgets, etc. Visual interface elements.
存储器260,包括存储用于驱动显示设备200的各种软件模块。如:第一存储器中存储的各种软件模块,包括:基础模块、检测模块、通信模块、显示控制模块、浏览器模块、和各种服务模块等中的至少一种。The memory 260 includes storing various software modules used to drive the display device 200. For example, various software modules stored in the first memory include: at least one of a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.
The basic module is an underlying software module used for signal communication among the various hardware components in the display device 200 and for sending processing and control signals to upper-layer modules. The detection module is a management module used to collect various information from various sensors or user input interfaces, and to perform digital-to-analog conversion, analysis, and management.
例如,语音识别模块中包括语音解析模块和语音指令数据库模块。显示控制模块用于控制显示器进行显示图像内容的模块,可以用于播放多媒体图像内容和UI界面等信息。通信模块,用于与外部设备之间进行控制和数据通信的模块。浏览器模块,用于执行浏览服务器之间数据通信的模块。服务模块,用于提供各种服务以及各类应用程序在内的模块。同时,存储器260还用存储接收外部数据和用户数据、各种用户界面中各个项目的图像以及焦点对象的视觉效果图等。For example, the voice recognition module includes a voice parsing module and a voice command database module. The display control module is a module for controlling the display to display image content, and can be used to play information such as multimedia image content and UI interface. The communication module is a module used for control and data communication with external devices. The browser module is a module used to perform data communication between browsing servers. The service module is used to provide various services and modules including various applications. At the same time, the memory 260 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects, etc.
图3示例性示出了根据示例性实施例中控制设备100的配置框图。如图3所示,控制设备100包括控制器110、通信接口130、用户输入/输出接口、存储器、供电电源。Fig. 3 exemplarily shows a configuration block diagram of the control device 100 according to an exemplary embodiment. As shown in FIG. 3, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface, a memory, and a power supply.
控制设备100被配置为控制显示设备200,以及可接收用户的输入操作指令,且将操作指令转换为显示设备200可识别和响应的指令,起用用户与显示设备200之间交互中介作用。如:用户通过操作控制设备100上频道加减键,显示设备200响应频道加减的操作。The control device 100 is configured to control the display device 200, and can receive input operation instructions from the user, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, so as to serve as an intermediary between the user and the display device 200. For example, the user operates the channel addition and subtraction key on the control device 100, and the display device 200 responds to the channel addition and subtraction operation.
在一些实施例中,控制设备100可是一种智能设备。如:控制设备100可根据用户需求安装控制显示设备200的各种应用。In some embodiments, the control device 100 may be a smart device. For example, the control device 100 can install various applications for controlling the display device 200 according to user requirements.
在一些实施例中,如图1所示,移动终端300或其他智能电子设备,可在安装操控显示设备200的应用之后,可以起到控制设备100类似功能。如:用户可以通过安装应用,在移动终端300或其他智能电子设备上可提供的图形用户界面的各种功能键或虚拟按钮, 以实现控制设备100实体按键的功能。In some embodiments, as shown in FIG. 1, the mobile terminal 300 or other smart electronic device can perform a similar function to control the device 100 after installing an application that controls the display device 200. For example, the user can install various function keys or virtual buttons of the graphical user interface that can be provided on the mobile terminal 300 or other smart electronic devices by installing applications to realize the function of controlling the physical keys of the device 100.
控制器110包括处理器112和RAM 113和ROM 114、通信接口130以及通信总线。控制器用于控制控制设备100的运行和操作,以及内部各部件之间通信协作以及外部和内部的数据处理功能。The controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface 130, and a communication bus. The controller is used to control the operation and operation of the control device 100, as well as the communication and cooperation between internal components, and external and internal data processing functions.
通信接口130在控制器110的控制下,实现与显示设备200之间控制信号和数据信号的通信。如:将接收到的用户输入信号发送至显示设备200上。通信接口130可包括WiFi芯片131、蓝牙模块132、NFC模块133等其他近场通信模块中至少之一种。The communication interface 130 realizes the communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the received user input signal is sent to the display device 200. The communication interface 130 may include at least one of other near field communication modules such as a WiFi chip 131, a Bluetooth module 132, and an NFC module 133.
用户输入/输出接口140,其中,输入接口包括麦克风141、触摸板142、传感器143、按键144等其他输入接口中至少一者。如:用户可以通过语音、触摸、手势、按压等动作实现用户指令输入功能,输入接口通过将接收的模拟信号转换为数字信号,以及数字信号转换为相应指令信号,发送至显示设备200。The user input/output interface 140, wherein the input interface includes at least one of other input interfaces such as a microphone 141, a touch panel 142, a sensor 143, and a button 144. For example, the user can implement the user instruction input function through voice, touch, gesture, pressing and other actions. The input interface converts the received analog signal into a digital signal and the digital signal into a corresponding instruction signal, and sends it to the display device 200.
输出接口包括将接收的用户指令发送至显示设备200的接口。在一些实施例中,可以红外接口,也可以是射频接口。如:红外信号接口时,需要将用户输入指令按照红外控制协议转化为红外控制信号,经红外发送模块进行发送至显示设备200。再如:射频信号接口时,需将用户输入指令转化为数字信号,然后按照射频控制信号调制协议进行调制后,由射频发送端子发送至显示设备200。The output interface includes an interface for sending the received user instruction to the display device 200. In some embodiments, it may be an infrared interface or a radio frequency interface. For example, in the case of an infrared signal interface, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol, and then sent to the display device 200 via the infrared sending module. For another example, in the case of a radio frequency signal interface, a user input instruction needs to be converted into a digital signal, which is then modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 by the radio frequency sending terminal.
在一些实施例中,控制设备100包括通信接口130和输入输出接口140中至少一者。控制设备100中配置通信接口130,如:WiFi、蓝牙、NFC等模块,可将用户输入指令通过WiFi协议、或蓝牙协议、或NFC协议编码,发送至显示设备200。In some embodiments, the control device 100 includes at least one of a communication interface 130 and an input/output interface 140. The control device 100 is configured with a communication interface 130, such as WiFi, Bluetooth, NFC, etc. modules, which can encode user input instructions to the display device 200 through the WiFi protocol, or the Bluetooth protocol, or the NFC protocol.
存储器190,用于在控制器的控制下存储驱动和控制控制设备200的各种运行程序、数据和应用。存储器190,可以存储用户输入的各类控制信号指令。The memory 190 is used to store various operating programs, data, and applications for driving and controlling the control device 200 under the control of the controller. The memory 190 can store various control signal instructions input by the user.
The power supply 180 is used to provide operating power support for the components of the control device 100 under the control of the controller, and may be a battery and related control circuits.
在一些实施例中,系统可以包括内核(Kernel)、命令解析器(shell)、文件系统和应用程序。内核、shell和文件系统一起组成了基本的操作系统结构,它们让用户可以管理文件、运行程序并使用系统。上电后,内核启动,激活内核空间,抽象硬件、初始化硬件参数等,运行并维护虚拟内存、调度器、信号及进程间通信(IPC)。内核启动后,再加载Shell和用户应用程序。应用程序在启动后被编译成机器码,形成一个进程。In some embodiments, the system may include a kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together form the basic operating system structure. They allow users to manage files, run programs, and use the system. After power on, the kernel starts, activates the kernel space, abstracts hardware, initializes hardware parameters, etc., runs and maintains virtual memory, scheduler, signals, and inter-process communication (IPC). After the kernel is started, the Shell and user applications are loaded. After the application is started, it is compiled into machine code to form a process.
Referring to FIG. 4, in some embodiments, the system is divided into four layers, which are, from top to bottom, the Applications layer (referred to as the "application layer"), the Application Framework layer (referred to as the "framework layer"), the Android runtime and system library layer (referred to as the "system runtime library layer"), and the kernel layer.
In some embodiments, at least one application program runs in the application layer. These applications may be Window programs, system setting programs, clock programs, camera applications, and the like that come with the operating system, or they may be applications developed by third-party developers, such as a Hi Jian program, a karaoke program, a magic mirror program, and so on. In specific implementations, the application packages in the application layer are not limited to the above examples and may actually include other application packages, which is not limited in the embodiments of the present application.
框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。应用程序框架层相当于一个处理中心,这个中心决定让应用层中的应用程序做出动作。应用程序通过API接口,可在执行中访问系统中的资源和取得系统的服务。The framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer. The application framework layer includes some predefined functions. The application framework layer is equivalent to a processing center, which decides to let applications in the application layer take actions. Through the API interface, the application can access the resources in the system and obtain the services of the system during execution.
As shown in FIG. 4, the application framework layer in the embodiments of the present application includes managers (Managers), content providers (Content Provider), and the like, where the managers include at least one of the following modules: an Activity Manager, used to interact with all activities running in the system; a Location Manager, used to provide system services or applications with access to the system location service; a Package Manager, used to retrieve various information related to the application packages currently installed on the device; a Notification Manager, used to control the display and clearing of notification messages; and a Window Manager, used to manage the icons, windows, toolbars, wallpapers, and desktop widgets on the user interface.
In some embodiments, the activity manager is used to manage the life cycle of each application and the usual navigation back functions, such as controlling the exit of an application (including switching the user interface currently displayed in the display window to the system desktop), opening an application, and going back (including switching the user interface currently displayed in the display window to the user interface one level above the currently displayed user interface).
In some embodiments, the window manager is used to manage all window programs, such as obtaining the size of the display screen, determining whether there is a status bar, locking the screen, capturing the screen, and controlling changes in the display window (for example, shrinking the display window, dithering the display, distorting the display, and so on).
在一些实施例中,系统运行库层为上层即框架层提供支撑,当框架层被使用时,安卓操作系统会运行系统运行库层中包含的C/C++库以实现框架层要实现的功能。In some embodiments, the system runtime layer provides support for the upper layer, that is, the framework layer. When the framework layer is used, the Android operating system will run the C/C++ library included in the system runtime layer to implement functions to be implemented by the framework layer.
在一些实施例中,内核层是硬件和软件之间的层。如图4所示,内核层至少包含以下驱动中的至少一种:音频驱动、显示驱动、蓝牙驱动、摄像头驱动、WIFI驱动、USB驱动、HDMI驱动、传感器驱动(如指纹传感器,温度传感器,触摸传感器、压力传感器等)等。In some embodiments, the kernel layer is a layer between hardware and software. As shown in Figure 4, the kernel layer contains at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor driver (such as fingerprint sensor, temperature sensor, touch Sensors, pressure sensors, etc.) etc.
在一些实施例中,内核层还包括用于进行电源管理的电源驱动模块。In some embodiments, the kernel layer further includes a power drive module for power management.
在一些实施例中,图4中的软件架构对应的软件程序和/或模块存储在图2或图3所示的第一存储器或第二存储器中。In some embodiments, the software programs and/or modules corresponding to the software architecture in FIG. 4 are stored in the first memory or the second memory shown in FIG. 2 or FIG. 3.
In some embodiments, taking the magic mirror application (a photographing application) as an example, when the remote control receiving device receives a remote control input operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the input operation into a raw input event (including information such as the value of the input operation and the timestamp of the input operation). The raw input event is stored in the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the event according to the current position of the focus. Taking the case where the input operation is a confirmation operation and the control corresponding to the confirmation operation is the magic mirror application icon control, the magic mirror application calls the interface of the application framework layer to start the magic mirror application, and then starts the camera driver by calling the kernel layer, so as to capture still images or videos through the camera.
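The flow just described can be summarized with a small Python sketch. The RawInputEvent and FrameworkDispatcher names are hypothetical and do not correspond to real Android framework APIs; the sketch only mirrors the described path from a raw input event to launching the application associated with the focused icon.

```python
# Simplified sketch of the input-event flow described above (remote key ->
# kernel raw event -> framework dispatch -> application launch).
from dataclasses import dataclass
import time

@dataclass
class RawInputEvent:
    value: str        # e.g. "KEY_OK" for a confirmation key
    timestamp: float  # time at which the kernel recorded the operation

class FrameworkDispatcher:
    def __init__(self, focused_control: str):
        self.focused_control = focused_control  # control currently holding focus

    def dispatch(self, event: RawInputEvent) -> None:
        # A confirmation key acting on the focused icon launches that application.
        if event.value == "KEY_OK" and self.focused_control == "magic_mirror_icon":
            self.launch_application("magic_mirror")

    def launch_application(self, name: str) -> None:
        print(f"starting {name} and requesting the camera driver via the kernel layer")

# Example: the kernel wraps a hardware interrupt into a raw event, and the
# framework layer resolves it against the control that currently has focus.
dispatcher = FrameworkDispatcher(focused_control="magic_mirror_icon")
dispatcher.dispatch(RawInputEvent(value="KEY_OK", timestamp=time.time()))
```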
In some embodiments, for a display device with a touch function, taking a split-screen operation as an example, the display device receives an input operation (such as a split-screen operation) performed by the user on the display screen, and the kernel layer can generate a corresponding input event based on the input operation and report the event to the application framework layer. The activity manager of the application framework layer sets the window mode (such as a multi-window mode) and the window position and size corresponding to the input operation. The window management of the application framework layer draws the windows according to the settings of the activity manager, and then sends the drawn window data to the display driver of the kernel layer, and the display driver displays the corresponding application interfaces in different display areas of the display screen.
In some embodiments, as shown in FIG. 5, the application layer contains at least one application whose corresponding icon control can be displayed on the display, such as a live TV application icon control, a video-on-demand application icon control, a media center application icon control, an application center icon control, a game application icon control, and so on.
在一些实施例中,直播电视应用程序,可以通过不同的信号源提供直播电视。例如, 直播电视应用程可以使用来自有线电视、无线广播、卫星服务或其他类型的直播电视服务的输入提供电视信号。以及,直播电视应用程序可在显示设备200上显示直播电视信号的视频。In some embodiments, the live TV application can provide live TV through different signal sources. For example, a live TV application may use input from cable TV, wireless broadcasting, satellite services, or other types of live TV services to provide TV signals. And, the live TV application can display the video of the live TV signal on the display device 200.
在一些实施例中,视频点播应用程序,可以提供来自不同存储源的视频。不同于直播电视应用程序,视频点播提供来自某些存储源的视频显示。例如,视频点播可以来自云存储的服务器端、来自包含已存视频节目的本地硬盘储存器。In some embodiments, video-on-demand applications can provide videos from different storage sources. Unlike live TV applications, VOD provides video display from certain storage sources. For example, video on demand can come from the server side of cloud storage, and from the local hard disk storage that contains stored video programs.
在一些实施例中,媒体中心应用程序,可以提供各种多媒体内容播放的应用程序。例如,媒体中心,可以为不同于直播电视或视频点播,用户可通过媒体中心应用程序访问各种图像或音频所提供服务。In some embodiments, the media center application can provide various multimedia content playback applications. For example, the media center can provide services that are different from live TV or video on demand, and users can access various images or audio through the media center application.
在一些实施例中,应用程序中心,可以提供储存各种应用程序。应用程序可以是一种游戏、应用程序,或某些和计算机系统或其他设备相关但可以在智能电视中运行的其他应用程序。应用程序中心可从不同来源获得这些应用程序,将它们储存在本地储存器中,然后在显示设备200上可运行。In some embodiments, the application center may provide for storing various application programs. The application program may be a game, an application program, or some other application program that is related to a computer system or other equipment but can be run on a smart TV. The application center can obtain these applications from different sources, store them in the local storage, and then run on the display device 200.
显示设备200上不仅可以显示一些普通图片,还可以显示全景图片。全景图片是一种通过广角的表现手段展示的图片,全景图片可以表现更多周围环境的图像。通常全景图片不能完整地显示在显示设备200上,当需要浏览该全景图片当前显示内容以外的内容时,需要对全景图片进行调整,以便将全景图片的其他内容调整到显示设备200的显示区域。The display device 200 can display not only some ordinary pictures, but also panoramic pictures. A panoramic picture is a picture displayed through a wide-angle representation method, and a panoramic picture can express more images of the surrounding environment. Generally, the panoramic picture cannot be completely displayed on the display device 200. When content other than the currently displayed content of the panoramic picture needs to be browsed, the panoramic picture needs to be adjusted so that other contents of the panoramic picture can be adjusted to the display area of the display device 200.
At present, when browsing a panoramic picture on the display device 200, the operator usually needs to use the remote control to adjust the displayed content of the panoramic picture. For example, when the operator presses a direction key on the remote control, the panoramic picture is moved in the corresponding direction on the screen of the display device 200, thereby showing content in the opposite direction that was not displayed before. Alternatively, the display device 200 captures the operator's gesture and then adjusts the displayed panoramic picture content according to the direction of the gesture. For example, if the operator's finger slides to the right on the screen of the display device 200, the panoramic picture moves to the right on the screen, thereby showing content that was previously not displayed on the left side of the panoramic picture.
However, whether with the above-mentioned remote control adjustment or gesture adjustment, the panoramic picture is always moved according to a fixed step preset in the display device 200. When the operator only wants to fine-tune the content of a certain panoramic picture, if the fixed step is too large, the distance the panoramic picture moves on the screen of the display device 200 will be too large, making it difficult to meet the operator's requirement for fine adjustment of the panoramic picture.
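For reference, a minimal sketch of the fixed-step panning described above might look as follows; the 200-pixel step is an assumed example, not a value taken from the embodiments.

```python
# Fixed-step panning used by remote-control or gesture adjustment: every key
# press or swipe moves the picture by the same preset number of pixels,
# regardless of how small an adjustment the operator actually wants.
FIXED_STEP_PX = 200  # preset step, illustrative only

def pan_fixed_step(offset_x: int, direction: str) -> int:
    """Return the new horizontal offset of the panoramic picture after one
    remote-control key press or one swipe in the given direction."""
    if direction == "left":
        return offset_x - FIXED_STEP_PX
    if direction == "right":
        return offset_x + FIXED_STEP_PX
    return offset_x

# Even if the operator only wants to move the picture by a few pixels,
# a single press still shifts it by the full 200-pixel step.
print(pan_fixed_step(0, "right"))  # -> 200
```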
In view of the above, the present application provides a panoramic picture browsing method and a display device, which can control the moving direction and distance of the panoramic picture on the display device 200 through changes in the face angle of the operator in front of the display device 200, so that the content of the panoramic picture that the operator needs to see is adjusted onto the screen of the display device 200.
In the display device 200, the display 275 can be used to display a panoramic picture, the detector 230 can be used to collect images of persons in front of the display device 200, and the controller 110 can be used to identify the target person from the images collected by the detector 230, recognize the face angle information of the target person in front of the display device 200, calculate the offset distance of the panoramic picture, and control the movement of the panoramic picture.
FIG. 6 is a schematic diagram of the interaction between a display device 200 and an operator according to an embodiment of the present application. As shown in FIG. 6, when browsing a panoramic picture using the panoramic picture browsing method provided in the embodiments of the present application, the operator needs to stand in front of the display device 200 and controls the movement of the panoramic picture on the display device 200 by turning his or her head, thereby displaying the content he or she wants to see.
图7为本申请实施例示出的一种全景图片浏览方法的流程图。如图7所示,该全景图片浏览方法具体包括如下步骤:Fig. 7 is a flowchart of a panoramic picture browsing method shown in an embodiment of the application. As shown in Figure 7, the panoramic picture browsing method specifically includes the following steps:
步骤S101,在显示设备200显示全景图片的情况下,识别显示设备200前的目标人物,所述目标人物用于表示在显示设备200前浏览全景图片的操作者。In step S101, when the display device 200 displays a panoramic picture, a target person in front of the display device 200 is identified, and the target person is used to indicate an operator who browses the panoramic picture in front of the display device 200.
The target person in this embodiment specifically refers to the actual operator. For example, if multiple people in front of the display device 200 are browsing the panoramic picture at the same time, in order to avoid the confusion caused by joint control by multiple people, the display device 200 can only accept the operation of one person, and that person can serve as the target person. In this embodiment, the person in front of the display device is usually identified first by means of face recognition, and then it is specifically determined whether that person is the target person. The way of selecting the target person is not unique. For example, the person closest to the display device 200 may be selected as the target person, in which case the pixel area of each face in front of the display device 200 needs to be compared; generally, the closer a person is to the display device 200, the larger the pixel area of the face. Alternatively, a specific person may be selected as the target person, in which case it is necessary to check whether a certain specific face is among the faces in front of the display device 200.
所以,在一些实施例中,识别显示设备200前的目标人物的步骤具体可以包括:Therefore, in some embodiments, the step of identifying the target person in front of the display device 200 may specifically include:
步骤S201,检测显示设备200前是否存在预设人物,所述预设人物用于表示显示设备中预先存储的浏览全景图片的操作者。Step S201: Detect whether there is a preset person in front of the display device 200, and the preset person is used to represent an operator who browses a panoramic picture pre-stored in the display device.
The preset person mentioned here may represent the specific person referred to above. Generally, each display device 200 stores one preset person. This preset person may be set during initialization, so that when the display device 200 browses a panoramic picture for the first time, it can recognize whether this preset person is in front of the display device 200. Alternatively, the preset person may be the target person saved at the end of the previous panoramic picture browsing session; when the display device 200 browses a panoramic picture again, the previously saved target person can be used as the preset person, and it is recognized whether this preset person is present in front of the display device 200.
步骤S202,如果存在,则确定显示设备200前的预设人物为目标人物。In step S202, if it exists, it is determined that the preset person in front of the display device 200 is the target person.
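A minimal sketch of this preset-person check (steps S201 and S202) might look as follows, assuming that faces are matched by comparing face embeddings against a stored embedding of the preset person; the embodiments do not prescribe a particular matching technique, so the representation and threshold here are illustrative only.

```python
# Sketch of S201-S202: check whether the stored preset person appears among the
# faces detected in front of the display device; if so, that face is the target.
from typing import List, Optional
import math

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_preset_person(detected_faces: List[dict],
                       preset_embedding: List[float],
                       threshold: float = 0.8) -> Optional[dict]:
    """Return the detected face that matches the stored preset person,
    or None if the preset person is not in front of the display device."""
    best, best_score = None, threshold
    for face in detected_faces:  # each face: {"id": ..., "embedding": [...]}
        score = cosine_similarity(face["embedding"], preset_embedding)
        if score >= best_score:
            best, best_score = face, score
    return best  # S202: if a match exists, this face is the target person
```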
当然,在一些情况下,浏览全景图片的操作者不一定是预设人物,这时需要重新确定一个目标人物,以便对全景图片进行操作。因此,在一些实施例中,识别显示设备200前的目标人物的步骤还可以包括:Of course, in some cases, the operator who browses the panoramic picture may not be a preset character, and a target character needs to be re-determined in order to operate on the panoramic picture. Therefore, in some embodiments, the step of identifying the target person in front of the display device 200 may further include:
步骤S301,在显示设备200前同时存在多个人物并且不存在预设人物的情况下,分别计算出每个人物的人脸对应的像素面积。In step S301, in the case that there are multiple characters in front of the display device 200 and there is no preset character, the pixel area corresponding to the face of each character is calculated respectively.
本实施例中,在多个人物中确定目标人物时,选择距离显示设备200最近的人物。通常人物与显示设备200的距离远近是通过人物的人脸像素面积进行判断的,人物距离显示设备200越近,其人脸像素面积越大。在显示设备200中,在检测器230采集到显示设备200前的图像时,控制器110可以识别出属于人物的人脸图像,再进一步计算出每个人脸图像对应的像素面积。In this embodiment, when a target person is determined among a plurality of persons, the person closest to the display device 200 is selected. Generally, the distance between a person and the display device 200 is determined by the pixel area of the person's face. The closer the person is to the display device 200, the larger the pixel area of the person's face. In the display device 200, when the detector 230 collects an image in front of the display device 200, the controller 110 can identify the face images belonging to the person, and then further calculate the pixel area corresponding to each face image.
步骤S302,选择像素面积最大的人脸对应的人物作为目标人物。In step S302, the person corresponding to the face with the largest pixel area is selected as the target person.
图8为本申请实施例示出的一种检测器230采集图像的示意图。在图8中,检测器230采集的显示设备200前的图像中一共有两个人物,控制器110可以识别出站在最前面的人物的人脸像素面积最大,进而确定该人物为目标人物。图8中的虚线框为控制器110识别出的目标人物的人脸范围。FIG. 8 is a schematic diagram of a detector 230 collecting images according to an embodiment of the application. In FIG. 8, there are two persons in the image in front of the display device 200 collected by the detector 230. The controller 110 can recognize that the person standing in the front has the largest face pixel area, and then determine that the person is the target person. The dashed frame in FIG. 8 is the face range of the target person recognized by the controller 110.
It is worth noting that if there is only one person in front of the display device 200, it can be considered that this person is browsing the panoramic picture. In this case, regardless of whether this person is the specific person or not, the controller 110 can take this person as the target person and then control the movement of the panoramic picture accordingly.
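Steps S301 and S302, together with the single-person case just described, can be sketched as follows; the bounding-box representation of a detected face is an assumption made for illustration.

```python
# Sketch of S301-S302: when several people are in front of the display device
# and none of them is the preset person, pick the face with the largest pixel
# area (i.e. the person closest to the screen) as the target person.
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # x, y, w, h of a detected face in image pixels

def select_target_face(face_boxes: List[Box]) -> Optional[Box]:
    if not face_boxes:
        return None                      # nobody in front of the display device
    if len(face_boxes) == 1:
        return face_boxes[0]             # a single person is always the operator
    # S301: pixel area of each face; S302: the largest one wins.
    return max(face_boxes, key=lambda box: box[2] * box[3])

# Example corresponding to FIG. 8: the person standing in front has the larger
# face box and is therefore chosen as the target person.
print(select_target_face([(420, 180, 90, 110), (250, 160, 160, 200)]))
```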
步骤S102,获取所述目标人物在第一预设时长前后的初始脸部角度和当前脸部角度。Step S102: Obtain the initial face angle and the current face angle of the target person before and after the first preset duration.
本实施例中,需要根据目标人物脸部角度的变化对全景图片进行调整,脸部的角度具有一定的变化时间,这个时间就是第一预设时长。In this embodiment, the panoramic picture needs to be adjusted according to the change in the angle of the target person's face. The angle of the face has a certain change time, and this time is the first preset duration.
After identifying the target person, the controller 110 can obtain the face rotation angles of the target person in front of the display device 200. These angles can be defined with reference to the three angles used in the aerospace field. FIG. 9 is a schematic diagram of aircraft angles according to an embodiment of the present application. As shown in FIG. 9, in the aerospace field, the control of aircraft and other flying vehicles is usually realized based on the roll angle, pitch angle, and yaw angle. In this embodiment, the concepts of roll angle, pitch angle, and yaw angle are likewise used to define the face angles of the target person, where the roll angle refers to the angle by which the face rotates, the pitch angle refers to the angle by which the face is raised or lowered, and the yaw angle refers to the angle by which the face turns in the horizontal direction.
如果目标人物需要浏览全景图片,那么第一预设时长前后的初始脸部角度和当前脸部角度一般是不相同的。初始脸部角度中具体包括脸部的三个角度,分别是初始翻滚角度、初始俯仰角度和初始偏航角度;当前脸部角度中也具体包括脸部的三个角度,分别是当前翻滚角度、当前俯仰角度和当前偏航角度。If the target person needs to browse the panoramic picture, the initial face angle before and after the first preset duration and the current face angle are generally different. The initial face angle specifically includes the three angles of the face, which are the initial roll angle, the initial pitch angle, and the initial yaw angle; the current face angle also specifically includes the three angles of the face, which are the current roll angle, Current pitch angle and current yaw angle.
Then, in some embodiments, after the target person is identified, the initial roll angle, initial pitch angle, and initial yaw angle of the target person in front of the display device 200 are obtained, and after the first preset duration has elapsed, the current roll angle, current pitch angle, and current yaw angle of the target person are obtained again.
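A minimal sketch of this two-sample angle acquisition might look as follows; the FaceAngles container, the estimate_face_angles() helper, and the 0.5-second value of the first preset duration are hypothetical stand-ins rather than elements defined by the embodiments.

```python
# Sketch of step S102: sample the target person's face angles twice,
# separated by the first preset duration.
from dataclasses import dataclass
import time

@dataclass
class FaceAngles:
    roll: float   # rotation of the face
    pitch: float  # face raised or lowered
    yaw: float    # face turned left or right in the horizontal direction

def estimate_face_angles(frame: dict) -> FaceAngles:
    # Placeholder: the frame is assumed to already carry head-pose results;
    # a real implementation would run a pose estimator on the camera image.
    return FaceAngles(frame["roll"], frame["pitch"], frame["yaw"])

def sample_angles(capture_frame, first_preset_duration_s: float = 0.5):
    """Return (initial, current) face angles taken before and after the
    first preset duration."""
    initial = estimate_face_angles(capture_frame())
    time.sleep(first_preset_duration_s)   # wait out the first preset duration
    current = estimate_face_angles(capture_frame())
    return initial, current               # used in step S103 to compute the offset
```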
步骤S103,利用所述初始脸部角度和所述当前脸部角度获得所述全景图片的偏移距离。Step S103: Obtain the offset distance of the panoramic picture by using the initial face angle and the current face angle.
Generally, when the operator needs to manipulate the panoramic picture, the angle of the face will have changed after the first preset duration, and the current face angle will be offset from the initial face angle by a certain amount. In this embodiment, this offset is used to obtain the offset distance corresponding to the panoramic picture.
FIG. 10 is a schematic diagram of another interaction between the display device 200 and an operator according to an embodiment of the present application. As shown in FIG. 10, when displaying a panoramic picture, the display device 200 can only show content as large as the screen of the display 275. If the operator wants to browse content on the right side of the panoramic picture that is not shown on the screen, he or she can face the display device 200 and turn his or her face to the right. The display device 200 calculates the distance by which the panoramic picture should shift to the left according to the angle through which the operator turns the face, and then controls the panoramic picture to shift to the left, so that more content on the right side is displayed on the display device 200.
In an actual operation scenario, the operation of the operator turning the face is not strictly to the right or to the left; there will also be a certain deviation upward or downward. Moreover, the movement of the panoramic picture on the display 275 is not strictly left-right or up-down either. The operator can also turn the face in a particular direction according to his or her own browsing needs in order to browse content in that direction; for example, if the operator turns the face toward the upper right, the panoramic picture moves in the opposite direction toward the lower left, thereby displaying the content in the upper-right direction. Therefore, in this embodiment, the three angles of the face rotation need to be detected and calculated in order to obtain a more accurate offset distance for the panoramic picture.
Step S104: adjust the content of the panoramic picture displayed on the display device 200 according to the offset distance.
Generally, the panoramic picture displayed on the display device 200 is a two-dimensional picture that can move in two directions, horizontal and vertical. The calculated offset distance therefore includes a horizontal offset distance and a vertical offset distance of the panoramic picture. The adjustment is not limited to moving the panoramic picture opposite to the direction in which the operator's face turns; moving the panoramic picture in the same direction as the face rotation can also achieve the purpose of browsing the panoramic picture in this embodiment.
In addition, the face of the target person identified in the above steps may be shown on the screen of the display 275, for example in the upper-right corner, so that the target person can observe his or her own face angle at any time and moderate the rotation according to how far the panoramic picture has moved. Other people in front of the display device 200, on seeing the target person on the screen, will also know who the actual operator is and will avoid standing too close to the display device 200, so as not to interfere with the browsing of the panoramic picture.
It can be seen from the above that the panoramic picture browsing method in the embodiments of this application browses the panoramic picture through changes in the face angle of the target person in front of the display device, and the offset distance of the panoramic picture is determined by how far the target person's face turns. The operator can therefore browse the panoramic picture as needed, without adjusting it by a fixed movement step.
In some embodiments, the step of obtaining the offset distance of the panoramic picture by using the initial face angle and the current face angle includes:
Step S401: calculate the horizontal offset distance of the panoramic picture by using the initial roll angle, the current roll angle, the initial yaw angle, and the current yaw angle.
In some embodiments, the horizontal offset distance of the panoramic picture may be calculated with the following formula:
Offset_X = (A1 × (Yaw_2 - Yaw_1) + (Roll_2 - Roll_1)) × delta_X / Time
where Offset_X denotes the horizontal offset distance of the panoramic picture; Roll_1 and Roll_2 denote the initial roll angle and the current roll angle, respectively; Yaw_1 and Yaw_2 denote the initial yaw angle and the current yaw angle, respectively; A1 denotes a correction coefficient; delta_X denotes the minimum adjustment angle value in the horizontal direction; and Time denotes the minimum adjustment offset time.
Step S402: calculate the vertical offset distance of the panoramic picture by using the initial pitch angle, the current pitch angle, the initial roll angle, and the current roll angle.
In some embodiments, the vertical offset distance of the panoramic picture may be calculated with the following formula:
Offset_Y = (A2 × (Pitch_2 - Pitch_1) + (Roll_2 - Roll_1)) × delta_Y / Time
where Offset_Y denotes the vertical offset distance of the panoramic picture; Roll_1 and Roll_2 denote the initial roll angle and the current roll angle, respectively; Pitch_1 and Pitch_2 denote the initial pitch angle and the current pitch angle, respectively; A2 denotes a correction coefficient; delta_Y denotes the minimum adjustment angle value in the vertical direction; and Time denotes the minimum adjustment offset time.
In this embodiment, delta_X and delta_Y are determined according to the resolution of the panoramic picture: the higher the resolution, the larger the delta_X and delta_Y angles. Time is determined according to the first preset duration: the longer the first preset duration, the longer Time. The values of A1 and A2 are usually less than 0.5.
FIG. 11 is a schematic diagram of the initial face angle of the target person obtained by the controller 110 according to an embodiment of this application, and FIG. 12 is a schematic diagram of the current face angle of the target person obtained by the controller 110. In FIG. 11 and FIG. 12, the dashed frame indicates the face region of the target person recognized by the controller 110. In FIG. 11, the initial roll angle Roll_1 of the target person's face is -3.2°, the initial yaw angle Yaw_1 is 4.2°, and the initial pitch angle Pitch_1 is -20.9°; in FIG. 12, the current roll angle Roll_2 is -0.4°, the current yaw angle Yaw_2 is 20.5°, and the current pitch angle Pitch_2 is -2.0°. Taking delta_X and delta_Y both as 0.5°, Time as 2 s, and A1 and A2 as 0.25 and 0.15, respectively, the controller 110 calculates the horizontal offset distance of the panoramic picture as:
Offset_X = (0.25 × (20.5 - 4.2) + (-0.4 - (-3.2))) × 0.5 / 2 = (4.075 + 2.8) × 0.25 = 1.71875
and then calculates the vertical offset distance of the panoramic picture as:
Offset_Y = (0.15 × (-2.0 - (-20.9)) + (-0.4 - (-3.2))) × 0.5 / 2 = (2.835 + 2.8) × 0.25 = 1.40875
It can be seen that when the face angle of the target person changes from (-3.2°, 4.2°, -20.9°) to (-0.4°, 20.5°, -2.0°), the offset distance of the panoramic picture is (1.71875, 1.40875). The controller 110 then moves the panoramic picture by the corresponding distances in the horizontal and vertical directions, thereby displaying more of its content.
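The calculation above can be summarized in a short sketch. This is only an illustration, not the claimed implementation: the function name and the angle-tuple layout are assumptions, and the formula form is the one that reproduces the worked example of FIG. 11 and FIG. 12.

```python
def panorama_offset(initial, current, a1=0.25, a2=0.15,
                    delta_x=0.5, delta_y=0.5, time=2.0):
    """Return (Offset_X, Offset_Y) from two (roll, yaw, pitch) face angles in degrees."""
    roll1, yaw1, pitch1 = initial
    roll2, yaw2, pitch2 = current
    # Horizontal offset: driven mainly by the yaw change, corrected by the roll change.
    offset_x = (a1 * (yaw2 - yaw1) + (roll2 - roll1)) * delta_x / time
    # Vertical offset: driven mainly by the pitch change, corrected by the roll change.
    offset_y = (a2 * (pitch2 - pitch1) + (roll2 - roll1)) * delta_y / time
    return offset_x, offset_y

# Worked example from FIG. 11 / FIG. 12:
print(panorama_offset((-3.2, 4.2, -20.9), (-0.4, 20.5, -2.0)))
# -> approximately (1.71875, 1.40875)
```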
In addition, to make the moving direction and distance of the panoramic picture explicit, in some embodiments the controller 110 may predefine a coordinate system on the screen of the display 275. A two-dimensional rectangular coordinate system may be established with one of the four screen corners as the origin, or with the exact center of the screen as the origin, and the panoramic picture is then moved with respect to this coordinate system.
In some embodiments, after the target person in front of the display device 200 is identified, the method further includes the following steps:
Step S501: if the target person in front of the display device 200 is not recognized after a preset duration, recalculate the pixel area corresponding to each face currently present in front of the display device 200.
In some cases, the operator who initially browsed the panoramic picture may quit midway while other operators still need to continue browsing. The detector 230 then can no longer recognize the original target person, and a new target person needs to be determined. As described above, the new target person can be determined by evaluating the face pixel area of every person in front of the display device 200 and selecting the person closest to the display device 200 as the target person.
Step S502: select the person whose face has the largest pixel area as the target person.
Generally, the closer a person is to the display device 200, the larger the pixel area of that person's face. When the detector 230 captures an image in front of the display device 200, the controller 110 identifies the face images belonging to the persons in the image and then calculates the pixel area corresponding to each face image.
Step S503: obtain the initial face angle of the target person.
Step S504: after a second preset duration, obtain the current face angle of the target person.
Like the first preset duration, the second preset duration in this embodiment is essentially a preset period of time, but its range is larger than that of the first preset duration; for example, if the first preset duration is T1, the second preset duration may be 2·T1. In practical scenes, the first preset duration T1 is usually set to 0.3 seconds.
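A minimal sketch of the re-selection logic in steps S501 and S502 is given below; the face-detection result structure and the function name are assumptions introduced for illustration and are not part of the embodiment.

```python
from typing import List, Optional, Tuple

# Each detected face is assumed to be an axis-aligned bounding box (x, y, width, height) in pixels.
Face = Tuple[int, int, int, int]

def select_target_face(faces: List[Face]) -> Optional[Face]:
    """Pick the face with the largest pixel area, i.e. the person presumed closest to the display."""
    if not faces:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

# Example: of three detected faces, the 200x220 box is selected as the new target person.
print(select_target_face([(100, 80, 120, 140), (400, 60, 200, 220), (700, 90, 90, 110)]))
```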
In some embodiments, after the offset distance of the panoramic picture has been obtained, the panoramic picture may be adjusted only after a certain waiting period, in order to balance the operator's experience against the adjustment effect. This waiting period must preserve both the efficiency and the quality of the adjustment, and is usually set to 2 seconds.
In some embodiments, the panoramic picture browsing method not only identifies the target person and determines whether the target person has changed, but also determines whether the currently browsed panoramic picture has changed. If, after the display content on the display device 200 is adjusted according to the offset distance, the controller 110 detects that the panoramic picture currently shown on the display 275 is no longer the previous one, the target person identification and subsequent steps of the above embodiments are performed again for the new panoramic picture before it is adjusted. If the controller 110 detects that the panoramic picture currently shown on the display 275 is still the previous one, the controller 110 can directly track the face angle change of the target person in front of the display device 200 to perform the next adjustment of the panoramic picture.
It can be seen from the above technical solutions that the embodiments of this application provide a panoramic picture browsing method. When the display device 200 displays a panoramic picture, the target person in front of the display device 200 is identified; the initial face angle of the target person and the current face angle after a preset duration are obtained; the offset distance by which the panoramic picture on the display device 200 needs to move is calculated from the initial face angle and the current face angle; and the panoramic picture is then adjusted according to the offset distance, so that previously hidden content is displayed on the display device 200. The solution of this application browses the panoramic picture through changes in the face angle of the target person in front of the display device 200, and the offset distance of the panoramic picture is determined by how far the target person's face turns, so the operator can browse the panoramic picture as needed without adjusting it by a fixed movement step.
This application further provides a display device 200, including: a display 275; a detector 230 configured to capture images in front of the display device 200; and a controller 110 configured to: identify a target person in front of the display device 200 when the display device 200 displays a panoramic picture, the target person representing the operator browsing the panoramic picture in front of the display device 200; obtain the initial face angle and the current face angle of the target person before and after a first preset duration; obtain the offset distance of the panoramic picture by using the initial face angle and the current face angle; and adjust the display content of the panoramic picture on the display device 200 according to the offset distance.
In some embodiments, the controller 110 is further configured to: detect whether a preset person is present in front of the display device 200, the preset person representing an operator pre-stored in the display device 200 for browsing panoramic pictures; and, if so, determine the preset person in front of the display device 200 as the target person.
In some embodiments, the controller 110 is further configured to: when multiple persons are present in front of the display device 200 and none of them is a preset person, calculate the pixel area corresponding to each person's face, and select the person whose face has the largest pixel area as the target person.
In some embodiments, the controller 110 is further configured to: obtain the initial roll angle, initial pitch angle, and initial yaw angle of the face of the target person in front of the display device 200; and, after the first preset duration has elapsed, obtain the current roll angle, current pitch angle, and current yaw angle of the face of the target person in front of the display device 200.
In some embodiments, the controller 110 is further configured to: calculate the horizontal offset distance of the panoramic picture by using the initial roll angle, the current roll angle, the initial yaw angle, and the current yaw angle; and calculate the vertical offset distance of the panoramic picture by using the initial pitch angle, the current pitch angle, the initial roll angle, and the current roll angle.
In some embodiments, the controller 110 is further configured to calculate the horizontal offset distance of the panoramic picture with the following formula:
Offset_X = (A1 × (Yaw_2 - Yaw_1) + (Roll_2 - Roll_1)) × delta_X / Time
where Offset_X denotes the horizontal offset distance of the panoramic picture; Roll_1 and Roll_2 denote the initial roll angle and the current roll angle, respectively; Yaw_1 and Yaw_2 denote the initial yaw angle and the current yaw angle, respectively; A1 denotes a correction coefficient; delta_X denotes the minimum adjustment angle value in the horizontal direction; and Time denotes the minimum adjustment offset time.
In some embodiments, the controller 110 is further configured to calculate the vertical offset distance of the panoramic picture with the following formula:
Offset_Y = (A2 × (Pitch_2 - Pitch_1) + (Roll_2 - Roll_1)) × delta_Y / Time
where Offset_Y denotes the vertical offset distance of the panoramic picture; Roll_1 and Roll_2 denote the initial roll angle and the current roll angle, respectively; Pitch_1 and Pitch_2 denote the initial pitch angle and the current pitch angle, respectively; A2 denotes a correction coefficient; delta_Y denotes the minimum adjustment angle value in the vertical direction; and Time denotes the minimum adjustment offset time.
In some embodiments, the controller 110 is further configured to: if the target person in front of the display device 200 is not recognized after a preset duration, recalculate the pixel area corresponding to each face currently present in front of the display device 200; select the person whose face has the largest pixel area as the target person; obtain the initial face angle of the target person; and, after a second preset duration, obtain the current face angle of the target person.
In order that the user can still see the display content of a control clearly when using the display device from a distant position, the display device provided by the embodiments of the present invention can, while the user moves during use, make the control adjust its position following the user's movement. For example, if the user moves to the left in front of the display device, the control follows the user and moves correspondingly to the left within the display interface, so that the viewing angle between the user and the control remains unchanged. This ensures that the user can see the display content of the control clearly at a certain viewing angle no matter where the user is located relative to the display device.
Specifically, a display device provided by an embodiment of the present invention includes a controller, and a display and a camera that each communicate with the controller. The camera is configured to capture environmental image data, which characterizes the position parameters of the user relative to the display; the camera sends the captured environmental image data to the controller, and the controller thereby obtains the user's position parameters. The position parameters include the perpendicular distance between the user and the display, and the position on the display interface at which the center point of the user's face frame lands when projected perpendicularly onto the display. The face frame is the calibration box placed on the user's face when the camera captures an image of the user in front of the display device; the center point of the face frame may be the center of the face frame or the midpoint between the user's two pupils. The display is configured to present a display interface in which a target control is displayed; the target control may be a notification, a pop-up box, a floating window, or the like.
To give the display device the function of dynamically adjusting the control position, a control key that enables this function may be configured in the controller. When a control on the display page needs to follow the user's movement, the control key can be turned on in advance so that the display device has the function of dynamically adjusting the control position. If the control key is not turned on, the control is displayed normally and does not adjust its position as the user moves.
FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment. Referring to FIG. 13, in the display device provided by the embodiments of the present invention, the controller implements the dynamic adjustment of the control based on face recognition and a distance-detection algorithm. Specifically, the controller is configured to perform the following steps:
S1: while the user moves from an initial position to an end position, obtain the initial position parameters of the target control, and receive the environmental image data corresponding to the initial position and the environmental image data corresponding to the end position captured by the camera.
After the user turns on the control key so that the display device has the function of dynamically adjusting the control position, the controller obtains the environmental image data captured by the camera in real time. If the position of the user changes while using the display device, the position before the change is taken as the user's initial position and the position after the change as the user's end position.
Since the camera captures the environmental image data in front of the display device in real time, the environmental image data corresponding to the user's initial position and the environmental image data corresponding to the end position can both be obtained. The environmental image data for different positions characterize the different relative distances between the user and the display, and the different positions on the display interface at which the center point of the user's face frame lands when projected perpendicularly onto the display.
To adjust the position of the control, the environmental image data captured by the camera must contain the user's face. In some embodiments, the controller counts the faces in the environmental image data and continues with the subsequent steps of the dynamic control adjustment method only when exactly one face is recognized.
Specifically, the controller is further configured to: receive the environmental image data captured by the camera; identify the number of faces in the environmental image data; and, when the number of faces in the environmental image data is 1, perform the step of obtaining the user initial position parameters and the user end position parameters.
When only one user is using the display device, executing the dynamic control adjustment method and controlling the control position according to that single user's position parameters improves accuracy. If multiple faces are present, the controller may display the target control normally without executing the dynamic adjustment method.
In other embodiments, if the faces of multiple users appear in the same environmental image data, the controller may alternatively select one of the users as the target user to follow, and use that target user as the basis for adjusting the control position.
FIG. 14 exemplarily shows a flowchart of a method for obtaining the initial position parameters of the target control according to an embodiment, and FIG. 15 exemplarily shows a schematic diagram of the reference coordinate system according to an embodiment. To adjust the position of the target control, its initial position must first be determined. Referring to FIG. 14, when obtaining the initial position parameters of the target control, the controller is further configured to perform the following steps:
S121: establish a reference coordinate system with the upper-left corner of the display interface as the coordinate origin, the left-to-right direction of the display interface as the positive X axis, and the top-to-bottom direction of the display interface as the positive Y axis.
To determine the initial position parameters of the target control accurately, a reference coordinate system may be established in the display interface in this embodiment. Referring to FIG. 15, the coordinate origin O of the reference coordinate system is set at the upper-left corner of the display interface, the positive X axis points from the left side to the right side of the display interface, and the positive Y axis points from the top to the bottom of the display interface.
S122: obtain the pixel count of the coordinate origin and the horizontal and vertical pixel counts of the control center point of the target control.
The initial position parameters of the target control can be expressed as coordinate values, and the horizontal and vertical coordinate values can be calculated from the pixel position of the control center point of the target control.
To this end, the controller obtains the resolution of the current display device from its system properties, and can then determine the pixel count of the coordinate origin and the pixel counts of the control center point of the target control. Since the coordinate origin lies at the leftmost edge of the display interface, its pixel count can be taken as 0.
To represent the initial position parameters of the target control accurately, this embodiment uses the coordinate position of the control center point M of the target control. The horizontal pixel count and the vertical pixel count of the control center point M are obtained separately: the horizontal pixel count is the number of pixels between the control center point M and the coordinate origin O along the X axis, and the vertical pixel count is the number of pixels between the control center point M and the coordinate origin O along the Y axis.
S123: calculate the horizontal pixel difference between the pixel count of the coordinate origin and the horizontal pixel count of the control center point, and the vertical pixel difference between the pixel count of the coordinate origin and the vertical pixel count of the control center point.
The pixel count of the coordinate origin O is 0, and the corresponding coordinates of the origin are (0, 0). The horizontal pixel count of the control center point is P_1 and its vertical pixel count is P_2, so the pixel coordinates of the control center point are (P_1, P_2).
Horizontal pixel difference M_1 = horizontal pixel count of the control center point - pixel count of the coordinate origin = P_1 - 0 = P_1.
Vertical pixel difference M_2 = vertical pixel count of the control center point - pixel count of the coordinate origin = P_2 - 0 = P_2.
S124: calculate the horizontal initial distance and the vertical initial distance between the control center point of the target control and the coordinate origin from the horizontal pixel difference, the vertical pixel difference, and the length of each pixel, and take the horizontal initial distance, the vertical initial distance, and the horizontal and vertical pixel counts of the control center point as the initial position parameters of the target control.
For a display device with a fixed resolution, the number of pixels in the display interface is fixed; that is, each resolution corresponds to a specific number of pixels. The spacing between two adjacent pixels, i.e. the length of each pixel, can therefore be obtained. If a pixel is square, its length and width are equal. Multiplying a pixel difference by the length of each pixel gives the corresponding distance.
Horizontal initial distance L_1 = horizontal pixel difference × pixel length = M_1 × n, which gives the horizontal initial distance between the control center point of the target control and the coordinate origin.
Vertical initial distance L_2 = vertical pixel difference × pixel length = M_2 × n, which gives the vertical initial distance between the control center point of the target control and the coordinate origin.
The coordinates of the control center point of the target control are determined from the horizontal and vertical initial distances, i.e. the coordinates of the control center point are (L_1, L_2); the pixel coordinates of the control center point are determined from the horizontal and vertical pixel counts, i.e. (P_1, P_2). The coordinates of the control center point and its pixel coordinates are taken as the initial position parameters of the target control.
For example, for a 60-inch display device, the length of the long side is fixed at about 135.5 cm. For a commonly used display interface with a resolution of 1080P, the corresponding pixel spacing (pixel length) is about 0.00071 m. If the pixel coordinates of the control center point of the target control are (720P, 480P), the horizontal initial distance is L_1 = 720 × 0.00071 = 0.5112 m and the vertical initial distance is L_2 = 480 × 0.00071 = 0.3408 m; that is, the coordinates of the control center point of the target control are (0.5112 m, 0.3408 m).
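The conversion in steps S121 to S124 can be sketched as follows. The function name is an illustrative assumption, and the pixel length is taken directly from the 60-inch, 1080P example above.

```python
PIXEL_LENGTH_M = 0.00071  # approximate pixel spacing of the 60-inch, 1080P example

def control_initial_position(center_px, pixel_length_m=PIXEL_LENGTH_M):
    """Steps S121-S124: convert the control center point from pixel coordinates (P_1, P_2),
    counted from the upper-left origin O, into physical distances (L_1, L_2) in metres."""
    p1, p2 = center_px                 # horizontal and vertical pixel counts
    l1 = p1 * pixel_length_m           # horizontal initial distance L_1
    l2 = p2 * pixel_length_m           # vertical initial distance L_2
    return (l1, l2), (p1, p2)          # initial position parameters of the target control

# Example from the embodiment: control center at (720P, 480P).
print(control_initial_position((720, 480)))
# -> approximately ((0.5112, 0.3408), (720, 480))
```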
S2: obtain the user initial position parameters carried in the environmental image data corresponding to the initial position, and the user end position parameters carried in the environmental image data corresponding to the end position.
Since the environmental image data captured by the camera characterizes the perpendicular distance between the user and the display and the position on the display interface at which the center point of the user's face frame lands when projected perpendicularly onto the display, the controller can read the user initial position parameters and the user end position parameters directly from the acquired environmental image data.
FIG. 16 exemplarily shows a schematic diagram of the environmental image data corresponding to the initial position according to an embodiment; FIG. 17 exemplarily shows a schematic diagram of the environmental image data corresponding to the end position according to an embodiment.
Referring to FIG. 16, when the user is at the initial position in front of the display device, the controller can obtain the perpendicular distance (relative distance) between the user and the display directly from the corresponding environmental image data, for example 1.70 m as shown in FIG. 16. Referring to FIG. 17, when the user has moved from the initial position to the end position, the controller can obtain the perpendicular distance (relative distance) between the user and the display directly from the environmental image data corresponding to the end position, for example 2.24 m as shown in FIG. 17.
FIG. 18 exemplarily shows how the point at which the center of the face frame lands on the display interface changes as the user moves, according to an embodiment. Referring to FIG. 18, when the user is at the initial position A, the center point of the user's face frame lands perpendicularly on the display at point X, and the line AX is perpendicular to the display interface. When the user moves to the end position B, the center point of the user's face frame lands perpendicularly on the display at point N, and the line BN is perpendicular to the display interface.
Therefore, when the user is at the initial position A, the line AX is the perpendicular distance (relative distance) between the user and the display, and point X is the position on the display interface at which the center point of the user's face frame lands; the line AX and point X thus determine the user initial position parameters. When the user moves to the end position B, the line BN is the perpendicular distance (relative distance) between the user and the display, and point N is the position on the display interface at which the center point of the user's face frame lands; the line BN and point N thus determine the user end position parameters.
S3: calculate the offset of the target control based on the user initial position parameters and the user end position parameters, the offset characterizing the movement parameters of the target control when its position is adjusted.
To keep the user's viewing angle toward the display device unchanged, that is, to keep the viewing angle at which the user views the display content of the target control unchanged so that the content remains clearly visible from any position, the display device provided in this embodiment controls the target control to adjust its position following the user's movement whenever the user moves.
Therefore, to determine the parameters of the target control's position adjustment accurately, the offset of the target control is calculated from the user initial position parameters and the user end position parameters, and the position parameters by which the target control needs to move are determined from this offset.
The user initial position parameters include the user's initial relative distance to the display and the initial position parameter; the user end position parameters include the user's end relative distance to the display and the end position parameter; the position parameters corresponding to the user refer to parameters of the center point of the face frame. The initial position parameter is the position on the display interface at which the center point of the user's face frame lands perpendicularly on the display when the user is at the initial position. The end position parameter is the position on the display interface at which the center point of the user's face frame lands perpendicularly on the display when the user has moved to the end position.
FIG. 19 exemplarily shows a flowchart of a method for calculating the offset of the target control according to an embodiment. Referring to FIG. 19, when calculating the offset of the target control based on the user initial position parameters and the user end position parameters, the controller is further configured to perform the following steps:
S31: calculate a first distance between the user's initial position parameter and the initial position parameter of the target control, and a second distance between the user's end position parameter and the initial position parameter of the target control, the position parameter corresponding to the target control being a parameter of the control center point.
The first distance is the in-plane distance between the point X at which the center of the user's face frame lands on the display interface when the user is at the initial position A, and the control center point M of the target control. The second distance is the in-plane distance between the point N at which the center of the user's face frame lands on the display interface when the user has moved to the end position B, and the control center point M of the target control.
Taking a user who translates from the initial position to the end position as an example, the first distance (line XM) and the second distance (line NM) are both distances along the horizontal direction of the display interface. In this embodiment, the first distance and the second distance can be calculated from the pixel difference between the center point of the face frame and the control center point of the target control.
Specifically, when calculating the first distance between the user's initial position parameter and the initial position parameter of the target control, the controller is further configured to perform:
Step 311: obtain the pixel count of the center point of the face frame when the user is at the initial position and the pixel count of the control center point of the target control.
Step 312: calculate the pixel difference between the pixel count of the face-frame center point and the pixel count of the control center point.
Step 313: calculate the first distance between the user's initial position parameter and the initial position parameter of the target control from the pixel difference and the length of each pixel.
The pixel count of the face-frame center point when the user is at the initial position can be obtained by the controller from the environmental image data corresponding to the initial position, and the pixel count of the control center point of the target control can be obtained from the system properties. For both pixel counts, the corresponding pixel coordinates can be read in the reference coordinate system.
Taking a user who translates from the initial position to the end position as an example, the pixel count of the face-frame center point does not change in the Y-axis direction. The pixel difference can therefore be calculated from the pixel count of the face-frame center point at the initial position and the pixel count of the control center point, i.e. from their horizontal pixel counts. For the specific calculation of the pixel difference and the first distance, reference can be made to steps S121 to S124 in the foregoing embodiment, which are not repeated here.
For example, when the user is at the initial position A, the horizontal pixel count of the face-frame center point X is 480P, while the pixel count of the control center point of the target control is 720P; the pixel difference is therefore 720P - 480P = 240P.
With a pixel length of 0.00071 m, the first distance is S_1 = 240 × 0.00071 = 0.1704 m.
When calculating the second distance between the user's end position parameter and the initial position parameter of the target control, the controller is further configured to perform:
Step 321: obtain the pixel count of the center point of the face frame when the user is at the end position and the pixel count of the control center point of the target control.
Step 322: calculate the pixel difference between the pixel count of the face-frame center point and the pixel count of the control center point.
Step 323: calculate the second distance between the user's end position parameter and the initial position parameter of the target control from the pixel difference and the length of each pixel.
Taking a user who translates from the initial position to the end position as an example, the pixel count of the face-frame center point does not change in the Y-axis direction. The pixel difference can therefore be calculated from the pixel count of the face-frame center point at the end position and the pixel count of the control center point, i.e. from their horizontal pixel counts. For the specific calculation of the pixel difference and the second distance, reference can be made to steps S121 to S124 in the foregoing embodiment, which are not repeated here.
For example, when the user is at the end position B, the horizontal pixel count of the face-frame center point N is 360P, while the pixel count of the control center point of the target control is 720P; the pixel difference is therefore 720P - 360P = 360P.
With a pixel length of 0.00071 m, the second distance is S_2 = 360 × 0.00071 = 0.2556 m.
S32: calculate, based on the initial relative distance, the end relative distance, and the first distance, the theoretical second distance when the user has moved to the end position, the theoretical second distance characterizing the theoretical distance between the user's end position and the terminal position of the target control.
Because the second distance between the user's new position and the control center point of the target control cannot guarantee that, at the end position, the user views the target control at the same angle as at the initial position, the position of the target control needs to be adjusted; that is, the theoretical second distance required for the user to view the target control at the same angle after moving to the end position must be determined.
In this embodiment, the controller calculates the theoretical second distance when the user has moved to the end position from the initial relative distance, the end relative distance, and the first distance as follows:
S_2' = BN · S_1 / AX;
where S_2' is the theoretical second distance, S_1 is the first distance, AX is the initial relative distance, and BN is the end relative distance.
FIG. 20 exemplarily shows a schematic diagram for determining the theoretical second distance according to an embodiment. Referring to FIG. 18 and FIG. 20, to keep the user's viewing angle toward the target control consistent during the movement, the angle between the line from the face-frame center point to the control center point and the line from the face-frame center point to the display interface must remain the same; that is, α and β are equal, where α is that angle when the user is at the initial position (the angle between lines AM and AX), and β is that angle when the user has moved to the end position (the angle between lines BM and BN).
To make α = β, we need tan(α) = tan(β), i.e. S_1/AX = S_2'/BN, from which the theoretical second distance S_2' = BN · S_1 / AX is obtained.
S33: calculate the difference between the theoretical second distance and the second distance to obtain the offset of the target control.
The theoretical second distance is the theoretical distance between the user's end position and the terminal position M' of the target control. Therefore, the offset Offset of the target control is obtained from the difference between the theoretical second distance and the second distance:
Offset = S_2 - S_2'.
The offset gives the distance by which the control center point M of the target control moves toward point M'.
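Putting steps S31 to S33 together, a minimal sketch is shown below, using the example values from FIG. 16 to FIG. 18 (AX = 1.70 m, BN = 2.24 m, face-frame center at 480P then 360P, control center at 720P). The function name is an assumption, and the printed result is simply what these example values yield.

```python
PIXEL_LENGTH_M = 0.00071  # pixel spacing from the 60-inch, 1080P example

def control_offset(face_px_initial, face_px_end, control_px,
                   ax_m, bn_m, pixel_length_m=PIXEL_LENGTH_M):
    """Steps S31-S33: horizontal offset of the target control so the viewing angle stays constant.

    face_px_initial / face_px_end -- horizontal pixel count of the face-frame center at A and B
    control_px                    -- horizontal pixel count of the control center M
    ax_m / bn_m                   -- perpendicular user-to-display distances at A and B, in metres
    """
    s1 = abs(control_px - face_px_initial) * pixel_length_m  # first distance XM
    s2 = abs(control_px - face_px_end) * pixel_length_m      # second distance NM
    s2_theoretical = bn_m * s1 / ax_m                         # S_2' = BN * S_1 / AX (tan(alpha) = tan(beta))
    return s2 - s2_theoretical                                # Offset = S_2 - S_2'

# Example values from the embodiment: face center 480P -> 360P, control center 720P, AX = 1.70 m, BN = 2.24 m.
offset = control_offset(480, 360, 720, 1.70, 2.24)
print(round(offset, 4))  # roughly 0.0311 m; the control center moves from M toward M' by this amount
```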
S4: obtain the terminal position parameters of the target control based on the initial position parameters of the target control and the offset of the target control, and move the target control to the position corresponding to the terminal position parameters.
From the initial position parameters of the target control and the offset of the target control, the terminal position to which the target control must be adjusted can be determined, and the position adjustment of the target control is performed according to the terminal position parameters.
FIG. 21 exemplarily shows a first schematic diagram of dynamically adjusting the control position according to an embodiment. Referring to FIG. 21, point M is the initial position parameter of the target control, and point M' is its terminal position parameter. Moving the target control from point M to point M' accomplishes the position adjustment of the target control when the user moves from the initial position A to the end position B. In this case, the terminal position parameter of the target control = initial position parameter - offset.
The above embodiment addresses the case in which the user translates from the initial position A to the end position B. In practical applications, the user may also move vertically, for example changing from a standing state to a sitting state; in this case the user's position also changes along the Y axis.
To accommodate the case in which the user's position changes in both the X-axis and Y-axis directions, the display device provided by the embodiments of the present invention determines a horizontal offset and a vertical offset separately when determining the offset of the target control. For example, when the user changes from standing directly in front of the display device to sitting on a chair at the rear left, the target control needs to be moved from its initial position toward the lower-left corner.
In this case, the user initial position parameters include horizontal initial position parameters and vertical initial position parameters, and the user end position parameters include horizontal end position parameters and vertical end position parameters. The horizontal initial position parameters include the user's horizontal relative distance to the display at the initial position and the horizontal initial position parameter; the vertical initial position parameters include the user's vertical relative distance to the display at the initial position and the vertical initial position parameter. The horizontal end position parameters include the user's horizontal relative distance to the display at the end position and the horizontal end position parameter; the vertical end position parameters include the user's vertical relative distance to the display at the end position and the vertical end position parameter.
The vertical relative distance (for both the initial position and the end position) is the distance through which the center point of the face frame moves along the Y axis, i.e. the height difference between the face-frame center point when the user stands and when the user sits. The vertical position parameter (for both the initial position and the end position) is the position on the display interface at which the center point of the user's face frame lands perpendicularly on the display after the user moves vertically to the end position.
Specifically, when calculating the offset of the target control based on the user initial position parameter and the user end position parameter, the controller is further configured to perform:
Step 701: calculate the horizontal offset of the target control based on the horizontal initial position parameter and the horizontal end position parameter.
Step 702: calculate the vertical offset of the target control based on the vertical initial position parameter and the vertical end position parameter.
The horizontal offset and the vertical offset of the target control can be calculated by following the procedure described in step S3 of the foregoing embodiment: the horizontal offset is obtained from the horizontal initial position parameter and the horizontal end position parameter, and the vertical offset is obtained from the vertical initial position parameter and the vertical end position parameter. The specific calculation process is not repeated here.
After the horizontal offset and the vertical offset of the target control have been determined, the end position parameter of the adjusted target control can be obtained from the initial position parameter of the target control. In this embodiment, the initial position parameter of the target control includes a horizontal initial position parameter and a vertical initial position parameter; accordingly, the determined end position parameter of the target control also includes a horizontal end position parameter and a vertical end position parameter.
Specifically, when obtaining the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control, the controller is further configured to perform:
Step 801: calculate the horizontal end position parameter of the target control according to the horizontal initial position parameter and the horizontal offset of the target control.
Step 802: calculate the vertical end position parameter of the target control according to the vertical initial position parameter and the vertical offset of the target control.
The horizontal end position to which the target control should be moved is determined from its horizontal initial position parameter and the horizontal offset, and the vertical end position is determined from its vertical initial position parameter and the vertical offset. The position of the target control is then adjusted according to the horizontal end position parameter and the vertical end position parameter, so that the target control follows the movement of the user.
FIG. 22 exemplarily shows a second schematic diagram of dynamically adjusting the position of a control according to an embodiment. Referring to FIG. 22, when the user's position changes in both the X-axis and Y-axis directions, for example when the user changes from standing directly in front of the display device to sitting on a chair at the rear left, the target control needs to be moved from its initial position toward the lower left corner.
In this case, horizontal end position parameter of the target control = horizontal initial position parameter − horizontal offset, and vertical end position parameter of the target control = vertical initial position parameter + vertical offset.
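A minimal sketch, under the sign convention of the FIG. 22 example (X decreases toward the left, Y grows downward), of how the end position of the control could be derived from its initial position and the two offsets; the clamping to the display bounds is an added assumption, not part of the disclosure:

```python
def control_end_position(initial_x: float, initial_y: float,
                         horizontal_offset: float, vertical_offset: float,
                         screen_width: float, screen_height: float) -> tuple[float, float]:
    """Apply the horizontal and vertical offsets to the control's initial position.

    Following the FIG. 22 example (user moves to the rear left and sits down),
    the X coordinate decreases by the horizontal offset and the Y coordinate,
    which grows downward in the reference coordinate system, increases by the
    vertical offset.
    """
    end_x = initial_x - horizontal_offset
    end_y = initial_y + vertical_offset
    # Assumed safeguard: keep the control inside the display interface.
    end_x = min(max(end_x, 0.0), screen_width)
    end_y = min(max(end_y, 0.0), screen_height)
    return end_x, end_y

# Example: a control at (960, 300) with offsets (200, 150) on a 1920x1080 interface
# moves to (760, 450), i.e. toward the lower left corner.
```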
It can be seen from the foregoing embodiments that, in the display device provided by the embodiments of the present invention, while the user moves from the initial position to the end position, the controller receives the environment image data corresponding to the initial position and the environment image data corresponding to the end position collected by the camera, so as to obtain the user initial position parameter and the user end position parameter; calculates the offset of the target control from the user initial position parameter and the user end position parameter; obtains the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control; and moves the target control to the position corresponding to the end position parameter. The display device thus enables the target control to follow the movement of the user, so that the angle at which the user views the target control remains unchanged at any position within the visual range of the camera of the display device, ensuring that the user can always see the content displayed by the control clearly and improving the user's subjective viewing experience.
FIG. 13 exemplarily shows a flowchart of a method for dynamically adjusting a control according to an embodiment. The present application further provides a method for dynamically adjusting a control, executed by the controller in the display device, and the method includes the following steps; a minimal illustrative sketch of the overall flow is given after step S4:
S1: during the process of the user moving from the initial position to the end position, acquire the initial position parameter of the target control, and receive the environment image data corresponding to the initial position and the environment image data corresponding to the end position collected by the camera;
S2: acquire the user initial position parameter carried in the environment image data corresponding to the initial position, and the user end position parameter carried in the environment image data corresponding to the end position;
S3: calculate the offset of the target control based on the user initial position parameter and the user end position parameter, the offset being used to characterize the movement parameter of the target control when its position is adjusted;
S4: obtain the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control, and move the target control to the position corresponding to the end position parameter.
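The sketch below strings steps S1–S4 together as one controller routine. It is only an illustration of the flow: every callable on `controller` and `camera` is a hypothetical placeholder introduced here, not an API defined by this disclosure.

```python
def adjust_target_control(controller, camera, target_control):
    # S1: acquire the control's initial position and the environment images
    # captured while the user is at the initial position and at the end position.
    control_initial = controller.get_control_position(target_control)
    image_initial = camera.capture()  # user at the initial position
    image_end = camera.capture()      # user after moving to the end position

    # S2: extract the user position parameters carried in the two images.
    user_initial = controller.extract_user_position(image_initial)
    user_end = controller.extract_user_position(image_end)

    # S3: compute the offset characterizing how the control must move.
    offset = controller.compute_offset(user_initial, user_end)

    # S4: derive the end position from the initial position and the offset,
    # then move the control to that position.
    control_end = controller.compute_end_position(control_initial, offset)
    controller.move_control(target_control, control_end)
```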
In some embodiments of the present application, calculating the offset of the target control based on the user initial position parameter and the user end position parameter includes:
the user initial position parameter includes an initial relative distance and an initial position parameter of the user relative to the display, the user end position parameter includes an end relative distance and an end position parameter of the user relative to the display, and the position parameters corresponding to the user refer to the parameters of the center point of the face frame;
calculating a first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control, and a second distance between the end position parameter corresponding to the user and the initial position parameter of the target control, where the position parameter corresponding to the target control refers to the parameter of the center point of the control;
calculating, based on the initial relative distance, the end relative distance and the first distance, a theoretical second distance for the user having moved to the end position, the theoretical second distance being used to characterize the theoretical distance between the end position corresponding to the user and the end position of the target control;
calculating the difference between the theoretical second distance and the second distance to obtain the offset of the target control.
In some embodiments of the present application, calculating the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control includes:
acquiring the number of pixels of the center point of the face frame when the user is at the initial position and the number of pixels of the control center point of the target control;
calculating the pixel-count difference between the number of pixels of the center point of the face frame and the number of pixels of the control center point;
calculating, according to the pixel-count difference and the length value of each pixel, the first distance between the initial position parameter corresponding to the user and the initial position parameter of the target control.
In some embodiments of the present application, calculating the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control includes (the sketch after these steps illustrates the pixel-based conversion used for both distances):
acquiring the number of pixels of the center point of the face frame when the user is at the end position and the number of pixels of the control center point of the target control;
calculating the pixel-count difference between the number of pixels of the center point of the face frame and the number of pixels of the control center point;
calculating, according to the pixel-count difference and the length value of each pixel, the second distance between the end position parameter corresponding to the user and the initial position parameter of the target control.
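A minimal sketch of the pixel-based conversion described in the two embodiments above; the per-pixel length value is assumed to be known for the display, and all names and numbers are illustrative:

```python
def center_distance(face_center_px: int, control_center_px: int,
                    pixel_length: float) -> float:
    """Distance between the face-frame center and the control center.

    Both centers are given as pixel counts along the same axis of the display
    interface; multiplying the pixel-count difference by the length of one
    pixel converts it into a physical distance.
    """
    pixel_diff = abs(face_center_px - control_center_px)
    return pixel_diff * pixel_length

# First distance: user at the initial position.
first_distance = center_distance(face_center_px=420, control_center_px=960, pixel_length=0.05)
# Second distance: user at the end position.
second_distance = center_distance(face_center_px=1500, control_center_px=960, pixel_length=0.05)
```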
In some embodiments of the present application, calculating, based on the initial relative distance, the end relative distance and the first distance, the theoretical second distance for the user having moved to the end position includes (an illustrative numeric example follows these steps):
calculating the theoretical second distance for the user having moved to the end position according to the formula S2' = BN·S1/AX;
where S2' is the theoretical second distance, S1 is the first distance, AX is the initial relative distance, and BN is the end relative distance.
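As an illustrative numeric check (the values are invented for the example only): with AX = 2.0 m, BN = 3.0 m and S1 = 0.4 m, the theoretical second distance is S2' = 3.0 × 0.4 / 2.0 = 0.6 m; if the measured second distance S2 is 0.9 m, the magnitude of the difference, 0.3 m, is the offset of the target control. A corresponding sketch:

```python
def control_offset(first_distance: float, second_distance: float,
                   initial_relative_distance: float,   # AX
                   end_relative_distance: float) -> float:  # BN
    """Offset of the target control obtained from S2' = BN * S1 / AX.

    The theoretical second distance S2' is the distance that should separate
    the user's end position from the control's end position; the offset is its
    difference from the measured second distance S2 (the sign only indicates
    the direction of the required movement).
    """
    theoretical_second = end_relative_distance * first_distance / initial_relative_distance
    return theoretical_second - second_distance

# Illustrative numbers: AX = 2.0, BN = 3.0, S1 = 0.4, S2 = 0.9
# -> S2' = 0.6 and the offset is 0.6 - 0.9 = -0.3 (0.3 in magnitude).
print(control_offset(0.4, 0.9, 2.0, 3.0))  # approximately -0.3 (floating point)
```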
In some embodiments of the present application, acquiring the initial position parameter of the target control includes (a sketch after these steps illustrates the coordinate convention):
establishing a reference coordinate system with the upper left corner of the display interface as the coordinate origin, the direction from the left side to the right side of the display interface as the positive X axis, and the direction from the upper side to the lower side of the display interface as the positive Y axis;
acquiring the number of pixels of the coordinate origin and the numbers of horizontal and vertical pixels of the control center point of the target control;
calculating the horizontal pixel-count difference between the number of pixels of the coordinate origin and the number of horizontal pixels of the control center point, and the vertical pixel-count difference between the number of pixels of the coordinate origin and the number of vertical pixels of the control center point;
calculating, according to the horizontal pixel-count difference, the vertical pixel-count difference and the length value of each pixel, the horizontal initial distance and the vertical initial distance between the control center point of the target control and the coordinate origin, and taking the horizontal initial distance, the vertical initial distance, and the numbers of horizontal and vertical pixels of the control center point as the initial position parameter of the target control.
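A minimal sketch of the coordinate convention and the pixel-to-distance conversion for the control center. It assumes the coordinate origin has pixel count 0 on both axes, so the pixel-count differences equal the control-center pixel counts; names are illustrative:

```python
def control_initial_position(control_center_px_x: int, control_center_px_y: int,
                             pixel_length: float) -> dict:
    """Initial position parameter of the target control.

    Reference coordinate system: origin at the upper-left corner of the display
    interface, X growing to the right, Y growing downward. With the origin at
    pixel 0 on both axes, the horizontal and vertical pixel-count differences
    are simply the control-center pixel counts.
    """
    return {
        "horizontal_initial_distance": control_center_px_x * pixel_length,
        "vertical_initial_distance": control_center_px_y * pixel_length,
        "control_center_px": (control_center_px_x, control_center_px_y),
    }

# Example: a control centered at pixel (960, 540) with a pixel length of 0.05
# has initial distances (48.0, 27.0) from the origin.
```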
In some embodiments of the present application, calculating the offset of the target control based on the user initial position parameter and the user end position parameter includes:
the user initial position parameter includes a horizontal initial position parameter and a vertical initial position parameter, and the user end position parameter includes a horizontal end position parameter and a vertical end position parameter;
calculating the horizontal offset of the target control based on the horizontal initial position parameter and the horizontal end position parameter;
calculating the vertical offset of the target control based on the vertical initial position parameter and the vertical end position parameter.
In some embodiments of the present application, obtaining the end position parameter of the target control based on the initial position parameter of the target control and the offset of the target control includes:
the initial position parameter of the target control includes a horizontal initial position parameter and a vertical initial position parameter;
calculating the horizontal end position parameter of the target control according to the horizontal initial position parameter and the horizontal offset of the target control;
calculating the vertical end position parameter of the target control according to the vertical initial position parameter and the vertical offset of the target control.
In some embodiments of the present application, the method further includes (a short sketch follows these steps):
receiving the environment image data collected by the camera;
identifying the number of human faces in the environment image data;
when the number of human faces in the environment image data is 1, performing the step of acquiring the user initial position parameter and the user end position parameter.
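A short sketch of the single-face gate described above; the list of detected face boxes is assumed to come from whatever face detector the device uses:

```python
def should_acquire_position_parameters(face_boxes: list) -> bool:
    """Proceed with acquiring the user initial and end position parameters
    only when exactly one face is present in the environment image data."""
    return len(face_boxes) == 1
```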
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when the program is executed, it may include some or all of the steps of the embodiments of the method for dynamically adjusting a control provided by the present invention. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
For convenience of explanation, the above description has been given with reference to specific embodiments. However, the exemplary discussion above is not intended to be exhaustive or to limit the implementations to the specific forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to better explain the principles and their practical application, thereby enabling those skilled in the art to make best use of the described embodiments and of the various modified embodiments suited to the particular use contemplated.

Claims (10)

  1. A display method, characterized by comprising:
    when a display device displays a panoramic picture, identifying a target person in front of the display device, the target person representing an operator browsing the panoramic picture in front of the display device;
    acquiring an initial face angle and a current face angle of the target person before and after a first preset duration;
    obtaining an offset distance of the panoramic picture by using the initial face angle and the current face angle; and
    adjusting display content of the panoramic picture on the display device according to the offset distance.
  2. The method according to claim 1, wherein the step of identifying the target person in front of the display device comprises:
    detecting whether a preset person exists in front of the display device, the preset person representing an operator, pre-stored in the display device, who browses the panoramic picture; and
    if the preset person exists, determining the preset person in front of the display device as the target person.
  3. The method according to claim 2, wherein the step of identifying the target person in front of the display device further comprises:
    when multiple persons are present in front of the display device at the same time and no preset person exists, calculating the pixel area corresponding to the face of each person; and
    selecting the person whose face has the largest pixel area as the target person.
  4. The method according to claim 1, wherein the step of acquiring the initial face angle and the current face angle of the target person before and after the first preset duration comprises:
    acquiring an initial roll angle, an initial pitch angle and an initial yaw angle of the face of the target person in front of the display device; and
    after the first preset duration has elapsed, acquiring a current roll angle, a current pitch angle and a current yaw angle of the face of the target person in front of the display device.
  5. The method according to claim 4, wherein the step of obtaining the offset distance of the panoramic picture by using the initial face angle and the current face angle comprises:
    calculating a horizontal offset distance of the panoramic picture by using the initial roll angle, the current roll angle, the initial yaw angle and the current yaw angle; and
    calculating a vertical offset distance of the panoramic picture by using the initial pitch angle, the current pitch angle, the initial roll angle and the current roll angle.
  6. The method according to claim 5, wherein the horizontal offset distance of the panoramic picture is calculated by using the following formula:
    [Formula image PCTCN2021081562-appb-100001: expression for the horizontal offset distance Offset X]
    wherein Offset X represents the horizontal offset distance of the panoramic picture, Roll1 and Roll2 respectively represent the initial roll angle and the current roll angle, Yaw1 and Yaw2 respectively represent the initial yaw angle and the current yaw angle, A1 represents a correction coefficient, delta X represents a minimum adjustment angle value in the horizontal direction, and Time represents a minimum adjustment offset time.
  7. The method according to claim 5, wherein the vertical offset distance of the panoramic picture is calculated by using the following formula:
    [Formula image PCTCN2021081562-appb-100002: expression for the vertical offset distance Offset Y]
    wherein Offset Y represents the vertical offset distance of the panoramic picture, Roll1 and Roll2 respectively represent the initial roll angle and the current roll angle, Pitch1 and Pitch2 respectively represent the initial pitch angle and the current pitch angle, A2 represents a correction coefficient, delta Y represents a minimum adjustment angle value in the vertical direction, and Time represents a minimum adjustment offset time.
  8. The method according to claim 1, wherein after identifying the target person in front of the display device, the method further comprises:
    if the target person in front of the display device is not identified after a preset duration, recalculating the pixel area corresponding to each face currently present in front of the display device;
    selecting the person whose face has the largest pixel area as the target person;
    acquiring an initial face angle of the target person; and
    after a second preset duration, acquiring a current face angle of the target person.
  9. The method according to any one of claims 1 to 8, wherein the face of the target person is displayed on the display screen of the display device.
  10. A display device, characterized by comprising:
    a display;
    a detector, configured to collect images in front of the display device; and
    a controller, configured to:
    when the display device displays a panoramic picture, identify a target person in front of the display device, the target person representing an operator browsing the panoramic picture in front of the display device;
    acquire an initial face angle and a current face angle of the target person before and after a first preset duration;
    obtain an offset distance of the panoramic picture by using the initial face angle and the current face angle; and
    adjust display content of the panoramic picture on the display device according to the offset distance.
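For readability only, the following non-normative Python sketch mirrors the target-person selection of claims 2, 3 and 8 (prefer a preset person, otherwise the face with the largest pixel area, and reselect when the target is lost). Face detection and recognition are assumed to be supplied by the caller, and the offset formulas of claims 6 and 7 are not reproduced here because their exact expressions are given in the formula images.

```python
from typing import Optional, Sequence

class Face:
    """A detected face: its pixel area and, if recognized, a person identifier."""
    def __init__(self, pixel_area: float, person_id: Optional[str] = None):
        self.pixel_area = pixel_area
        self.person_id = person_id

def select_target_person(faces: Sequence[Face], preset_ids: Sequence[str]) -> Optional[Face]:
    """Claims 2-3: prefer a preset (pre-stored) person; otherwise take the face
    with the largest pixel area; return None when nobody is in front of the device."""
    if not faces:
        return None
    for face in faces:
        if face.person_id is not None and face.person_id in preset_ids:
            return face
    return max(faces, key=lambda f: f.pixel_area)

def reselect_after_loss(faces: Sequence[Face]) -> Optional[Face]:
    """Claim 8: if the target person is not identified after the preset duration,
    recalculate the face areas and take the largest one as the new target."""
    return max(faces, key=lambda f: f.pixel_area) if faces else None
```

With the target selected, the flow of claim 1 samples the face angles before and after the first preset duration, converts them into horizontal and vertical offset distances (claims 5 to 7), and adjusts the displayed content of the panoramic picture accordingly.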
PCT/CN2021/081562 2020-04-27 2021-03-18 Display method and display device WO2021218473A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010342885.9A CN113645502B (en) 2020-04-27 2020-04-27 Method for dynamically adjusting control and display device
CN202010342885.9 2020-04-27
CN202010559804.0 2020-06-18
CN202010559804.0A CN113825001B (en) 2020-06-18 2020-06-18 Panoramic picture browsing method and display device

Publications (1)

Publication Number Publication Date
WO2021218473A1 true WO2021218473A1 (en) 2021-11-04

Family

ID=78374066

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081562 WO2021218473A1 (en) 2020-04-27 2021-03-18 Display method and display device

Country Status (1)

Country Link
WO (1) WO2021218473A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9596401B2 (en) * 2006-10-02 2017-03-14 Sony Corporation Focusing an image based on a direction of a face of a user
US20140009503A1 (en) * 2012-07-03 2014-01-09 Tourwrist, Inc. Systems and Methods for Tracking User Postures to Control Display of Panoramas
CN105988578A (en) * 2015-03-04 2016-10-05 华为技术有限公司 Interactive video display method, device and system
CN106383655A (en) * 2016-09-19 2017-02-08 北京小度互娱科技有限公司 Interaction control method for controlling visual angle conversion in panorama playing process, and device for realizing interaction control method
CN106598428A (en) * 2016-11-29 2017-04-26 宇龙计算机通信科技(深圳)有限公司 Method and system for playing panoramic video, and terminal equipment
CN108319362A (en) * 2018-01-02 2018-07-24 联想(北京)有限公司 A kind of panoramic information display methods, electronic equipment and computer storage media
CN108235132A (en) * 2018-03-13 2018-06-29 哈尔滨市舍科技有限公司 Panoramic video visual angle regulating method and device based on human eye positioning

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449162A (en) * 2021-12-22 2022-05-06 天翼云科技有限公司 Method and device for playing panoramic video, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113330736A (en) Display and image processing method
WO2021179359A1 (en) Display device and display picture rotation adaptation method
WO2020248680A1 (en) Video data processing method and apparatus, and display device
CN111970548B (en) Display device and method for adjusting angle of camera
CN112291599B (en) Display device and method for adjusting angle of camera
WO2021212463A1 (en) Display device and screen projection method
CN112866773B (en) Display equipment and camera tracking method in multi-person scene
CN111899175A (en) Image conversion method and display device
CN112243141A (en) Display method and display equipment for screen projection function
WO2022048203A1 (en) Display method and display device for manipulation prompt information of input method control
WO2022028060A1 (en) Display device and display method
WO2021213097A1 (en) Display apparatus and screen projection method
CN113825002A (en) Display device and focus control method
WO2021218473A1 (en) Display method and display device
WO2021212470A1 (en) Display device and projected-screen image display method
WO2021031598A1 (en) Self-adaptive adjustment method for video chat window position, and display device
CN111078926A (en) Method for determining portrait thumbnail image and display equipment
CN114430492B (en) Display device, mobile terminal and picture synchronous scaling method
WO2021180223A1 (en) Display method and display device
CN112218156B (en) Method for adjusting video dynamic contrast and display equipment
CN112399235B (en) Camera shooting effect enhancement method and display device of intelligent television
CN113824870A (en) Display device and camera angle adjusting method
CN113825001B (en) Panoramic picture browsing method and display device
CN114302203A (en) Image display method and display device
CN114417035A (en) Picture browsing method and display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21796851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21796851

Country of ref document: EP

Kind code of ref document: A1