CN111314739B - Image processing method, server and display device - Google Patents

Image processing method, server and display device

Info

Publication number
CN111314739B
Authority
CN
China
Prior art keywords
reference position
image block
block set
user
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010095906.1A
Other languages
Chinese (zh)
Other versions
CN111314739A (en)
Inventor
任子健
史东平
国廷峰
吴连朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd filed Critical Juhaokan Technology Co Ltd
Priority to CN202010095906.1A priority Critical patent/CN111314739B/en
Publication of CN111314739A publication Critical patent/CN111314739A/en
Application granted granted Critical
Publication of CN111314739B publication Critical patent/CN111314739B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

In panoramic video FOV transmission schemes, a terminal needs to determine, in real time, the image blocks covered by the field-of-view area of the current frame, and then request the corresponding image block data from the server or from local storage, so that the panoramic video content can be displayed completely. Calculating the covered image blocks anew for every frame involves a large amount of computation and can degrade the playing performance of the system. In the technical solution provided by the invention, the mapping relation between each reference position and its corresponding image block set is computed in advance through preprocessing and stored as a configuration file in a preset list; when the terminal plays a panoramic video, the image blocks to be loaded for the current frame are determined by searching the preset list, and the corresponding data is then requested and loaded.

Description

Image processing method, server and display device
Technical Field
The present application relates to the technical field of display devices, and in particular, to an image processing method, a server, and a display device.
Background
Panoramic video is a new multimedia form developed from 360-degree panoramic images: a series of static panoramic images is played continuously to form dynamic panoramic video. Panoramic video is generally produced by shooting in all directions with a professional panoramic camera, stitching the video images of all directions together with software, and then playing the result with a dedicated player that projects the flat video onto a 360-degree panorama, presenting the observer with a fully surrounding spatial field of view spanning 360 degrees horizontally and 180 degrees vertically. The viewer can interact with the video content through head movement, eye movement, remote-controller input and the like, obtaining an immersive, on-the-scene experience. As a new type of heterogeneous multimedia service, a panoramic video service stream contains multiple data types such as audio, video, text, interaction and control signaling, and has diversified QoS (Quality of Service) requirements.
In recent years, to reduce the bandwidth requirement of panoramic video transmission, reduce data redundancy and increase the supportable video resolution, an FOV (field of view) transmission scheme is often adopted. The FOV transmission scheme transmits the panoramic video picture differentially according to the viewing angle, concentrating on high-quality transmission of the picture inside the current viewing angle area: the panoramic video is divided spatially into blocks, each block is encoded at multiple rates to generate several video streams, the terminal requests the video streams of the corresponding blocks according to the user's viewpoint position, and finally the terminal decodes the streams, merges the blocks and presents them to the user. With its low bandwidth requirement and flexible strategy, the FOV transmission scheme has drawn much attention from academia. The scheme requires the panoramic video to be divided into a number of blocks, and when the terminal plays the video it loads and plays the image blocks located in the field-of-view area of the current frame. In the conventional method, however, the image blocks covered by the field-of-view area must be calculated in real time for every frame; because the amount of calculation is large, this real-time computation degrades system performance, which in turn harms the final playing effect and the user experience.
Disclosure of Invention
The application aims to provide an image processing method, a server and a display device.
A first aspect of the embodiments of the present application shows an image processing method, where the method is applied to a server side, and includes:
receiving a viewpoint position uploaded by display equipment;
screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position;
and sending the image blocks covered in the target image block set to the display device.
A second aspect of the embodiments of the present application shows an image processing method, which is applied to a display device side, and includes: receiving a viewpoint position;
screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position; the preset list is generated by a server, and the display device requests it from the server and then stores it locally;
and loading the image blocks covered in the target image block set.
A third aspect of embodiments of the present application shows a server, including:
a receiving unit configured to receive a viewpoint position uploaded by a display device;
a screening unit configured to screen out a target image block set matched with the viewpoint position from a preset list, where the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position;
a sending unit configured to send the image blocks covered in the target image block set to the display device.
A fourth aspect of the embodiments of the present application shows a display device, including:
a receiving unit configured to receive a viewpoint position;
a screening unit configured to screen out a target image block set matched with the viewpoint position from a preset list, where the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position; the preset list is generated by a server, and the display device requests it from the server and then stores it locally;
a loading unit configured to load the image blocks covered in the target image block set.
The embodiments of the application disclose an image processing method, a server and a display device, the method comprising: receiving a viewpoint position uploaded by a display device; screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position; and sending the image blocks covered in the target image block set to the display device. In panoramic video FOV transmission schemes, a terminal needs to obtain in real time the image blocks covered by the field-of-view area of the current frame and then request the corresponding image block data from the server or from local storage, so that the panoramic video content can be displayed completely. Calculating the covered image blocks anew for every frame involves a large amount of computation and can degrade the playing performance of the system. In the technical solution provided by the invention, the mapping relation between each reference position and its corresponding image block set is computed in advance through preprocessing and stored as a configuration file in a preset list; when the terminal plays a panoramic video, the image blocks to be loaded for the current frame are determined by searching the preset list, and the corresponding data is then requested and loaded.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an operation scenario between a display device and a control apparatus according to an embodiment of the present application;
fig. 2 is a block diagram illustrating a hardware configuration of the control apparatus 100 in fig. 1 according to an embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a hardware configuration of the display device 200 in fig. 1 according to an embodiment of the present disclosure;
fig. 4 is a block diagram illustrating an architecture configuration of an operating system in a memory of the display device 200 according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating panoramic image cutting in accordance with a preferred embodiment;
FIG. 7 is a schematic diagram illustrating reference location point sampling in accordance with a preferred embodiment;
FIG. 8 is a schematic diagram illustrating reference location point sampling in accordance with a preferred embodiment;
FIG. 9 is a schematic diagram of a panoramic video user three-degree-of-freedom interaction;
FIG. 10 is a schematic view of a viewing angle area corresponding to a viewing point location in accordance with a preferred embodiment;
FIG. 11 is a diagram illustrating image block partitioning in accordance with a preferred embodiment;
FIG. 12 is a diagram illustrating the distance of a viewpoint location point from the reference location point i in accordance with a preferred embodiment;
fig. 13 is a diagram illustrating a division method of a view point region according to a preferred embodiment;
fig. 14 is a schematic view of a viewing angle area corresponding to the viewing point area 1 according to a preferred embodiment;
fig. 15 is a schematic view of a viewing angle region corresponding to the viewing point region 1 shown in fig. 14;
fig. 16 is a schematic diagram of a viewpoint area 1 shown according to a preferred embodiment;
FIG. 17 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 18 is a block diagram illustrating the architecture of a server in accordance with a preferred embodiment;
fig. 19 is a block diagram showing the construction of a display device according to a preferred embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus. As shown in fig. 1, the control apparatus 100 and the display device 200 may communicate with each other in a wired or wireless manner.
Among them, the control apparatus 100 is configured to control the display device 200: it receives an operation instruction input by a user and converts the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down key on the control device 100, the display device 200 responds with the channel up/down operation.
The control device 100 may be a remote controller 100A, which controls the display device 200 wirelessly or by wire through infrared protocol communication, Bluetooth protocol communication or other short-distance communication methods. The user may input user instructions through keys on the remote controller, voice input, control panel input and the like to control the display apparatus 200. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key and so on to control the display device 200.
The control apparatus 100 may also be an intelligent device, such as a mobile display device 100B, a tablet computer, a notebook computer and the like. For example, the display device 200 may be controlled using an application program running on the smart device. Through configuration, the application program can provide the user with various controls on an intuitive User Interface (UI) on the screen of the smart device.
For example, the mobile display device 100B may install a software application with the display device 200, implement connection communication through a network communication protocol, and implement the purpose of one-to-one control operation and data communication. Such as: the mobile display apparatus 100B may be caused to establish a control instruction protocol with the display apparatus 200 to implement the function of the physical keys as arranged in the remote control 100A by operating various function keys or virtual buttons of the user interface provided on the mobile display apparatus 100B. The audio and video content displayed on the mobile display device 100B may also be transmitted to the display device 200 to implement a synchronous display function.
The display apparatus 200 may provide a network television function combining a broadcast receiving function with a computer support function. The display device may be implemented as a digital television, a web television, an Internet Protocol Television (IPTV) or the like.
The display device 200 may be a liquid crystal display, an organic light emitting display, a projection device. The specific display device type, size, resolution, etc. are not limited.
The display apparatus 200 also performs data communication with the server 300 through various communication means. Here, the display apparatus 200 may be allowed to be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 300 may provide various contents and interactions to the display apparatus 200. By way of example, the display device 200 may send and receive information such as: receiving Electronic Program Guide (EPG) data, receiving software Program updates, or accessing a remotely stored digital media library. The servers 300 may be a group or groups of servers, and may be one or more types of servers. Other web service contents such as a video on demand and an advertisement service are provided through the server 300.
Fig. 2 is a block diagram illustrating the configuration of the control device 100. As shown in fig. 2, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160.
The controller 110 includes a RAM (Random Access Memory) 111, a ROM (Read-Only Memory) 112, a processor 113, a communication interface, and a communication bus. The controller 110 is used to control the operation of the control device 100, as well as the internal components of the communication cooperation, external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the display device 200.
And a memory 120 for storing various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 120 may store various control signal commands input by a user.
The communicator 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. For example, the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the display device 200 via the communicator 130, and the control apparatus 100 may receive signals transmitted by the display device 200 via the communicator 130. The communicator 130 may include an infrared module 131 (infrared signal interface), a radio frequency signal interface 132 and a Bluetooth module 133. For example, when the infrared signal interface is used, a user input instruction is converted into an infrared control signal according to the infrared control protocol, and the infrared control signal is sent to the display device 200 through the infrared sending module. As another example, when the radio frequency signal interface is used, a user input command is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding controlling the display apparatus 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the display apparatus 200, or outputs an image or voice signal received by the display apparatus 200. Here, the output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like. For example, the remote controller 100A may receive an output signal such as audio, video, or data from the output interface 150, and display the output signal in the form of an image on the display 154, in the form of audio on the sound output interface 153, or in the form of vibration on the vibration interface 152.
And a power supply 160, for providing operation power support for each element of the control device 100 under the control of the controller 110, which may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the display device 200 is exemplarily shown in fig. 3. As shown in fig. 3, the display apparatus 200 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio output interface 285, and a power supply 290.
The tuner demodulator 210 receives the broadcast television signal in a wired or wireless manner, may perform modulation and demodulation processing such as amplification, mixing, and resonance, and is configured to demodulate, from a plurality of wireless or wired broadcast television signals, an audio/video signal carried in a frequency of a television channel selected by a user, and additional information (e.g., EPG data).
The tuner demodulator 210 responds to the television channel frequency selected by the user and the television signal carried by that frequency, under the control of the controller 250.
The tuner demodulator 210 can receive a television signal in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to the different kinds of the received television signals.
In other exemplary embodiments, the tuning demodulator 210 may also be in an external device, such as an external set-top box. In this way, the set-top box outputs a television signal after modulation and demodulation, and inputs the television signal into the display apparatus 200 through the external device interface 240.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the display apparatus 200 may transmit content data to an external apparatus connected via the communicator 220, or browse and download content data from an external apparatus connected via the communicator 220. The communicator 220 may include a network communication protocol module or a near field communication protocol module, such as a WIFI module 221, a bluetooth module 222, and a wired ethernet module 223, so that the communicator 220 may receive a control signal of the control device 100 according to the control of the controller 250 and implement the control signal as a WIFI signal, a bluetooth signal, a radio frequency signal, and the like.
The detector 230 is a component of the display apparatus 200 for collecting signals of an external environment or interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive a user's sound, such as a voice signal of a control instruction of the user to control the display device 200; alternatively, ambient sounds may be collected that identify the type of ambient scene, enabling the display device 200 to adapt to ambient noise.
In some other exemplary embodiments, the detector 230, which may further include an image collector 232, such as a camera, a video camera, etc., may be configured to collect external environment scenes to adaptively change the display parameters of the display device 200; and the function of acquiring the attribute of the user or interacting gestures with the user so as to realize the interaction between the display equipment and the user.
In some other exemplary embodiments, the detector 230 may further include a light receiver (not shown) for collecting the intensity of the ambient light to adapt to the display parameter variation of the display device 200.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor (not shown), such as by sensing an ambient temperature, and the display device 200 may adaptively adjust a display color temperature of the image. For example, when the temperature is higher, the display apparatus 200 may be adjusted to display a color temperature of an image that is cooler; when the temperature is lower, the display device 200 may be adjusted to display a warmer color temperature of the image.
The external device interface 240 is a component for providing the controller 250 to control data transmission between the display apparatus 200 and an external apparatus. The external device interface 240 may be connected to an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner, and may receive data such as a video signal (e.g., moving image), an audio signal (e.g., music), additional information (e.g., EPG), etc. of the external apparatus.
The external device interface 240 may include: one or more of an HDMI (High Definition Multimedia Interface) terminal 241, a CVBS (Composite Video Blanking and Sync) terminal 242, a Component (analog or digital) terminal 243, a USB (Universal Serial Bus) terminal 244, a Component (Component) terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260.
As shown in fig. 3, the controller 250 includes a RAM (random access memory) 251, a ROM (read only memory) 252, an image processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. Among them, the RAM251, the ROM252, the image processor 253, the CPU processor 254, and the communication interface 255 are connected by a communication bus 256.
The ROM252 stores various system boot instructions. When the display apparatus 200 starts power-on upon receiving the power-on signal, the CPU processor 254 executes a system boot instruction in the ROM252, copies the operating system stored in the memory 260 to the RAM251, and starts running the boot operating system. After the start of the operating system is completed, the CPU processor 254 copies the various application programs in the memory 260 to the RAM251 and then starts running and starting the various application programs.
An image processor 253 for generating various graphic objects such as icons, operation menus, and user input instruction display graphics, etc. The image processor 253 may include an operator for performing an operation by receiving various interactive instructions input by a user, thereby displaying various objects according to display attributes; and a renderer for generating various objects based on the operator and displaying the rendered result on the display 275.
A CPU processor 254 for executing operating system and application program instructions stored in memory 260. And according to the received user input instruction, processing of various application programs, data and contents is executed so as to finally display and play various audio-video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some initialization operations of the display apparatus 200 in the display apparatus preloading mode, and/or operations of displaying a screen in the normal mode. A plurality of or one sub-processor for performing an operation in a state of a standby mode or the like of the display apparatus.
The communication interface 255 may include a first interface, a second interface, and an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a User input command for selecting a GUI (Graphical User Interface) object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the User input command.
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink page, document, image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
A memory 260 for storing various types of data, software programs or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or nonvolatile memory. Herein, the term "memory" includes the memory 260, the RAM 251 and the ROM 252 of the controller 250, and a memory card in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting GUI objects are stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an Application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
A block diagram of the architectural configuration of the operating system in the memory of the display device 200 is illustrated in fig. 4. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
The application layer: both the application programs built into the system and non-system-level application programs belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a setup application, a post application, a media center application and the like. These applications may be implemented as Web applications executed on a WebKit engine, and in particular may be developed and executed based on HTML5, Cascading Style Sheets (CSS) and JavaScript.
Here, HTML, which is called HyperText Markup Language (HyperText Markup Language), is a standard Markup Language for creating web pages, and describes the web pages by Markup tags, where the HTML tags are used to describe characters, graphics, animation, sound, tables, links, etc., and a browser reads an HTML document, interprets the content of the tags in the document, and displays the content in the form of web pages.
CSS, short for Cascading Style Sheets, is a computer language used to express the style of HTML documents; it defines style structure such as fonts, colors and positions. A CSS style can be stored directly in an HTML web page or in a separate style file, enabling control over the styles in the web page.
JavaScript, a language applied to Web page programming, can be inserted into an HTML page and interpreted and executed by a browser. The interaction logic of the Web application is realized by JavaScript. The JavaScript can package a JavaScript extension interface through the browser to realize communication with the kernel layer.
The middleware layer may provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may be implemented as Multimedia and Hypermedia Experts Group (MHEG) middleware related to data broadcasting, DLNA (Digital Living Network Alliance) middleware related to communication with an external device, middleware providing the browser environment in which each application program of the display device runs, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, and provides device driver services for various hardware, such as: the display driver is provided for the display, the camera driver is provided for the camera, the key driver is provided for the remote controller, the WIFI driver is provided for the WIFI module, the audio driver is provided for the audio output interface, the Power Management driver is provided for the Power Management (PM) module, and the like.
A user interface 265 receives various user interactions. Specifically, it is used to transmit an input signal of a user to the controller 250 or transmit an output signal from the controller 250 to the user. For example, the remote controller 100A may transmit an input signal, such as a power switch signal, a channel selection signal, a volume adjustment signal, etc., input by the user to the user interface 265, and then the input signal is transferred to the controller 250 through the user interface 265; alternatively, the remote controller 100A may receive an output signal such as audio, video, or data output from the user interface 265 via the controller 250, and display the received output signal or output the received output signal in audio or vibration form.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on the display 275, and the user interface 265 receives the user input commands through the GUI. Specifically, the user interface 265 may receive user input commands for controlling the position of a selector in the GUI to select different objects or items.
Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user interface 265 receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 270 is configured to receive an external video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 275.
Illustratively, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is configured to demultiplex an input audio/video data stream; for example, for an input MPEG-2 stream (a compression standard for moving images and audio on digital storage media), the demultiplexing module separates it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60 Hz input into 120 Hz or 240 Hz, commonly by means of frame interpolation.
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format of a display, such as converting the format of the signal output by the frame rate conversion module to output RGB data signals.
A display 275 for receiving the image signal from the video processor 270 and displaying video content, images and the menu manipulation interface. The displayed video content may come from the broadcast signal received by the tuner demodulator 210, or from video content input through the communicator 220 or the external device interface 240. The display 275 also displays a user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
And, the display 275 may include a display screen assembly for presenting a picture and a driving assembly for driving the display of an image. Alternatively, a projection device and projection screen may be included, provided display 275 is a projection display.
The audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played by the speaker 286.
Illustratively, audio processor 280 may support various audio formats. Such as MPEG-2, MPEG-4, Advanced Audio Coding (AAC), high efficiency AAC (HE-AAC), and the like.
The audio output interface 285 is used for receiving the audio signal output by the audio processor 280 under the control of the controller 250. The audio output interface 285 may include a speaker 286, or an external sound output terminal 287 such as an earphone output terminal for output to an external sound-producing device.
In other exemplary embodiments, video processor 270 may comprise one or more chips. Audio processor 280 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 270 and the audio processor 280 may be separate chips or may be integrated with the controller 250 in one or more chips.
And a power supply 290 for supplying power to the display apparatus 200 from the external power input, under the control of the controller 250. The power supply 290 may be a built-in power supply circuit installed inside the display apparatus 200, or a power supply installed outside the display apparatus 200.
Based on the technical problems in the prior art, a first aspect of the embodiments of the present application shows an image processing method, which is applied to a server side, and specifically refer to fig. 5, where the method includes the following steps:
s101, receiving a viewpoint position uploaded by display equipment;
in the technical scheme shown in the embodiment of the application, the display device firstly acquires the viewpoint position of a user and uploads the acquired viewpoint position to the server;
the display device may acquire the viewpoint position by any viewpoint position acquisition method commonly used in the art, for example: motion capture, eyeball tracking, myoelectric simulation, gesture tracking, direction tracking, voice interaction, sensors and the like. The applicant does not limit the manner of acquiring the viewpoint position; any method that can acquire the viewpoint position in practical application can be applied to the technical solutions shown in the embodiments of the present application.
S102, screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores each reference position and the image block set corresponding to that reference position;
according to the technical solution provided by the embodiment of the application, the mapping relation between each reference position and its corresponding image block set can be stored as a configuration file in the preset list through preprocessing. When the terminal plays the panoramic video, the image blocks to be loaded for the current frame are determined by searching the preset list, and the corresponding data is then requested and loaded.
The preset list generation process is explained in detail as follows:
firstly, selecting n reference positions in a panoramic image;
then, sequentially determining the field angle area i_a corresponding to each reference position i according to the reference position i and the user visible field angle area, where i = 1 to n;
then, determining the set of image blocks covered by the field angle area i_a as the image block set i_r, where the image block set i_r is the image block set corresponding to the reference position i;
and finally, storing each reference position i and its image block set i_r in a preset list.
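As a concrete illustration, the preset list can be thought of as a simple mapping from each reference position to its image block identifiers. The following Python sketch builds and saves such a configuration file; the JSON layout and all names here are illustrative assumptions, not taken from the patent text:

```python
import json

def save_preset_list(entries, path):
    """entries: iterable of (reference_position, block_ids) pairs, where
    reference_position is, e.g., an (x, y) point and block_ids is the set of
    image block identification values i_r covering its field angle area i_a."""
    config = [
        {"reference_position": list(ref), "image_blocks": sorted(block_ids)}
        for ref, block_ids in entries
    ]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f)

# Hypothetical example: one reference point maps to the 16 blocks of
# rows 1-4, columns 1-4 (matching the set 9_r described later in the text).
save_preset_list(
    [((1.5, 2.5), {(r, c) for r in range(1, 5) for c in range(1, 5)})],
    "preset_list.json",
)
```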
In the application process, the server screens out the target image block set matched with the viewpoint position from the preset list, where the target image block set is the image block set corresponding to the reference position matched with the viewpoint position.
The technical solution shown in the embodiment of the application adopts two different ways to determine the field angle area i_a corresponding to the reference position i:
in the first way of determining the field angle area i _ a corresponding to the reference position i, the reference position is a reference position point: the specific determination process is as follows:
step 1 a-1: and a region formed by rotating the diagonal line of the user visible rectangular region by 360 degrees around the reference position point i as a circle center is the field angle region i _ a corresponding to the reference position i, wherein the user visible rectangular region is a user visible field angle region.
The technical solution shown in the embodiment of the application cuts the panoramic video into a plurality of image blocks in advance, and each image block is configured with an identification value; in a feasible embodiment, the coordinates of an image block can be used as its identification value.
Fig. 6 is a schematic diagram illustrating the cutting of the panoramic image according to a preferred embodiment, and as can be seen from the diagram, the panoramic image is cut into 32 image blocks in advance, and each image block corresponds to an identification value (the identification value may be a coordinate value).
For example: the identification value of the first image block in the first row is (1, 1); the first image block of the first row may be referred to as image block 11 in this application;
the identification value of the second image block in the first row is (1, 2); the second image block of the first row may be referred to as image block 12 in this application;
the identification value of the third image block in the first row is (1, 3); the third image block of the first row may be referred to as image block 13 in this application;
the identification value of the fourth image block in the first row is (1, 4); the fourth image block of the first row may be referred to as image block 14 in this application;
the identification value of the fifth image block in the first row is (1, 5); the fifth image block of the first row may be referred to as image block 15 in this application;
the identification value of the sixth image block in the first row is (1, 6); the sixth image block of the first row may be referred to herein as image block 16;
the identification value of the seventh image block in the first row is (1, 7); the seventh image block of the first row may be referred to as image block 17 in this application;
the identification value of the eighth image block of the first row is (1, 8); the eighth image block of the first row may be referred to as image block 18 in this application.
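This identification scheme can be sketched briefly; the helper below is an assumption for illustration, not part of the patent. Given the panorama size and the block grid, the (row, column) identification value of the block containing a pixel follows directly:

```python
def block_id(x, y, pano_w, pano_h, rows=4, cols=8):
    """Return the 1-based (row, column) identification value of the image
    block containing pixel (x, y) in a panorama cut into rows x cols blocks."""
    row = int(y * rows / pano_h) + 1
    col = int(x * cols / pano_w) + 1
    return (row, col)

assert block_id(0, 0, 3840, 1920) == (1, 1)      # first block of the first row
assert block_id(3839, 0, 3840, 1920) == (1, 8)   # eighth block of the first row
```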
In the technical solution shown in the embodiment of the application, n reference position points are selected from the panoramic image. The specific sampling interval is not limited and can be determined according to the actual situation at implementation time: the smaller the sampling interval, the larger the preprocessing calculation amount and the longer the preprocessing time, but the more accurate the viewpoint mapping and block scheduling.
FIG. 7 is a schematic diagram illustrating reference location point sampling in accordance with a preferred embodiment. It can be seen that 3 × 7 reference position points are taken from the panoramic image, giving the reference position point set {T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9, T_10, T_11, T_12, T_13, T_14, T_15, T_16, T_17, T_18, T_19, T_20, T_21}. The sampling method shown in fig. 7 requires processing 21 reference position points.
FIG. 8 is a schematic diagram illustrating reference location point sampling in accordance with a preferred embodiment. It can be seen that 2 × 4 reference position points are taken from the panoramic image; the sampling mode shown in fig. 8 requires processing only 8 sampling position points to obtain the reference position point set {T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8}.
As shown in fig. 9, the user has three degrees of freedom of interaction while watching panoramic video, namely rotation around the X axis, the Y axis and the Z axis. Rotation around the X axis and the Y axis moves the user viewpoint and the user visible viewing angle area in the latitudinal and longitudinal directions, while rotation around the Z axis rotates the user visible viewing angle area around the viewpoint position. Therefore, for the current frame, the user visible viewing angle region is actually a rectangular region determined by the viewpoint position, the rotation angle around the Z axis and the user viewing angle size.
As shown in fig. 10, at a fixed position point, the range covered by the user visible viewing angle area as it rotates continuously around the Z axis is a circular area. Assuming the size of the user visible viewing angle area is (L, H), the diameter of this circular area is D = √(L² + H²).
If, during viewpoint mapping preprocessing, the rotation of the user visible viewing angle area around the Z axis were also calculated and recorded as a variable, the number and complexity of the data entries recorded in the configuration file would increase greatly, and the amount of computation required by the terminal to load video blocks in real time would also increase, affecting system performance. Therefore, in this application, the region formed by rotating the diagonal of the user visible rectangular region 360 degrees around the reference position point i is taken as the field angle region i_a corresponding to the reference position i.
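The preprocessing of this first way can be sketched as follows. Under the simplifying assumption that the panorama is treated as a flat image with no wrap-around at its edges (which the patent does not spell out), the field angle region i_a is the circle of diameter D = √(L² + H²) centred on the reference point, and the image block set i_r contains every block whose rectangle intersects that circle; all names are illustrative:

```python
import math

def covered_blocks(ref_x, ref_y, vis_w, vis_h, pano_w, pano_h, rows=4, cols=8):
    """Image block set i_r for a reference point: all blocks intersecting the
    circle swept by the diagonal of the (vis_w, vis_h) visible rectangle."""
    radius = math.hypot(vis_w, vis_h) / 2.0    # D / 2, with D = sqrt(L^2 + H^2)
    bw, bh = pano_w / cols, pano_h / rows      # block width and height
    covered = set()
    for r in range(rows):
        for c in range(cols):
            # closest point of block (r+1, c+1) to the circle centre
            nx = min(max(ref_x, c * bw), (c + 1) * bw)
            ny = min(max(ref_y, r * bh), (r + 1) * bh)
            if math.hypot(nx - ref_x, ny - ref_y) <= radius:
                covered.add((r + 1, c + 1))
    return covered
```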
FIG. 11 is a diagram illustrating image block partitioning in accordance with a preferred embodiment. As can be seen from the figure, the panoramic image is cut into 4 × 8 image blocks, where the identification value of each image block is its coordinate value in the panoramic image. 3 × 7 reference position points are selected in advance, namely the reference position points T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9, T_10, T_11, T_12, T_13, T_14, T_15, T_16, T_17, T_18, T_19, T_20 and T_21. In the figure, area A is the viewing angle area visible to the user, and area B is the viewing angle area corresponding to the reference position point T_9. It can be seen that the image block set 9_r corresponding to the reference position point T_9 is {image block 11, image block 12, image block 13, image block 14, image block 21, image block 22, image block 23, image block 24, image block 31, image block 32, image block 33, image block 34, image block 41, image block 42, image block 43, image block 44}.
Each reference position point i shown in fig. 11 and its corresponding image block set i_r are stored in a preset list; the resulting preset list is shown in Table 1.
TABLE 1
(Table 1 was provided as an image in the original document; it lists each reference position point T_1 to T_21 together with its corresponding image block set i_r, such as the set 9_r above for T_9.)
In the above embodiment, the step in S102 of screening out the target image block set matching the viewpoint position from the preset list includes:
Step 2a-1: traversing the preset list and calculating the distance between the viewpoint position and each reference position point i;
Step 2a-2: selecting the reference position point i with the shortest distance as the target reference position; the image block set corresponding to the target reference position is the target image block set.
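A minimal sketch of this lookup, assuming the preset list is held as a dictionary keyed by reference position point (the names are illustrative):

```python
import math

def target_block_set(viewpoint, preset_list):
    """preset_list: dict mapping reference point (x, y) -> image block set.
    Returns the block set of the reference point nearest the viewpoint."""
    nearest = min(
        preset_list,
        key=lambda ref: math.hypot(ref[0] - viewpoint[0],
                                   ref[1] - viewpoint[1]),
    )
    return preset_list[nearest]
```

For the situation of fig. 12 below, taking the minimum over {l1, l2, l3, l4} would select T_9.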
Fig. 12 is a schematic diagram illustrating the distances between a viewpoint position and the reference position points according to a preferred embodiment. It can be seen that the reference position points adjacent to viewpoint A are T_1, T_2, T_8 and T_9, where the distance from viewpoint A to T_1 is l1, the distance from viewpoint A to T_2 is l2, the distance from viewpoint A to T_9 is l3, and the distance from viewpoint A to T_8 is l4. Since l3 < l4 < l2 < l1, the server determines T_9, the point corresponding to l3, as the target reference position.
The server traverses the preset list (Table 1) and calls the image block set 9_r corresponding to T_9 in Table 1, namely {image block 11, image block 12, image block 13, image block 14, image block 21, image block 22, image block 23, image block 24, image block 31, image block 32, image block 33, image block 34, image block 41, image block 42, image block 43, image block 44}.
In the second way of determining the field angle area i_a corresponding to the reference position i, the reference position is a viewpoint area. The specific determination process is as follows:
Step 1b: reading, for each viewpoint area i, the longitude and latitude range i_oa corresponding to the projection of the viewpoint area on the spherical surface, where the longitude and latitude range i_oa comprises the longitude range (lon1, lon2) and the latitude range (lat1, lat2);
Step 2b: determining the region surrounded by the expanded longitude range (lon1 - D/2, lon2 + D/2) and latitude range (lat1 - D/2, lat2 + D/2) as the field angle region i_a corresponding to the viewpoint area i, where D is the diagonal of the user visible rectangular region, and the user visible rectangular region is the user visible field angle region.
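A small sketch of steps 1b and 2b, assuming D and the latitude/longitude values are expressed in the same angular units (names are illustrative):

```python
def field_angle_region(lon_range, lat_range, d):
    """Expand a viewpoint area's (lon1, lon2), (lat1, lat2) projection by
    D/2 on every side to obtain the field angle region i_a."""
    (lon1, lon2), (lat1, lat2) = lon_range, lat_range
    return (lon1 - d / 2, lon2 + d / 2), (lat1 - d / 2, lat2 + d / 2)

# Hypothetical example: a viewpoint area spanning 45 degrees each way, D = 30.
print(field_angle_region((0, 45), (0, 45), 30))
# ((-15.0, 60.0), (-15.0, 60.0))
```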
Specifically, fig. 13 is a schematic diagram illustrating a method for dividing view point areas according to a preferred embodiment, and it can be seen that 16 view point areas are divided in a panorama image.
Fig. 14 is a schematic view of the viewing angle area 6_a corresponding to the viewpoint area 6 in fig. 13. In the figure, the rectangular area drawn with a solid line is the viewpoint area 6, and the circular area is a circle whose diameter equals the diagonal D of the user visible rectangular region (the visible area of the display device). It can be seen that the viewpoint area projects onto the sphere within the corresponding longitude range (lon1, lon2) and latitude range (lat1, lat2);
the region surrounded by the expanded latitude and longitude ranges (lon1 - D/2, lon2 + D/2) and (lat1 - D/2, lat2 + D/2) is determined as the field angle region 6_a corresponding to the viewpoint area 6 (the rectangular region drawn with a dotted line in fig. 14), where D is the diagonal of the user visible rectangular region, and the user visible rectangular region is the user visible viewing angle region.
Area C in fig. 15 is the viewing angle area 6_a corresponding to the viewpoint area 6 in the solution shown in fig. 14. Referring to fig. 15, the image block set covered by area C, namely the image block set 6_r, is {image block 12, image block 13, image block 14, image block 15, image block 22, image block 23, image block 24, image block 25, image block 32, image block 33, image block 34, image block 35}.
In the above embodiment, the step in S102 of screening out the target image block set matching the viewpoint position from the preset list comprises:
Step 2b-1: traversing the preset list, and respectively reading the longitude and latitude range i_oa of each viewpoint area i;
Step 2b-2: in response to the longitude and latitude value corresponding to the viewpoint position falling within the longitude and latitude range i_oa, determining the viewpoint area i corresponding to the longitude and latitude range i_oa as the target viewpoint area; the image block set corresponding to the target viewpoint area is the target image block set.
Fig. 16 is a schematic diagram of the viewpoint area 1 according to a preferred embodiment; it can be seen that viewpoint 1, viewpoint 2, viewpoint 3 and viewpoint 4 all fall within the range of the viewpoint area 1. Therefore, whichever of these viewpoints the display device actually uploads, the server calls the image block set corresponding to the viewpoint area 1, i.e., { tile 11, tile 12, tile 13, tile 21, tile 22, tile 23 }.
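Steps 2b-1 and 2b-2 therefore reduce to a containment test over the stored ranges; a small sketch under the same assumptions as above, with a hypothetical structure for the preset list:

    def screen_by_containment(viewpoint, area_list):
        # Steps 2b-1/2b-2: traverse the preset list and return the image
        # block set of the viewpoint area whose longitude/latitude range
        # contains the uploaded viewpoint position.
        # area_list: iterable of ((lon1, lon2), (lat1, lat2), block_set).
        lon, lat = viewpoint
        for (lon1, lon2), (lat1, lat2), block_set in area_list:
            if lon1 <= lon <= lon2 and lat1 <= lat <= lat2:
                return block_set
        return None  # viewpoint falls outside every stored area

In the situation of fig. 16, viewpoints 1 through 4 all pass the same containment test, so each returns the image block set of the viewpoint area 1.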
S103, sending the image blocks covered in the target image block set to the display device.
To sum up, the embodiment of the present application shows an image processing method applied to a server. Through preprocessing, the mapping relationship between each reference position and its corresponding image block set is stored as a configuration file in a preset list; when the terminal plays a panoramic video, the image blocks to be loaded for the current frame are determined by looking up the preset list, and the corresponding data are then requested and loaded.
A second aspect of the embodiments of the present application shows an image processing method. Specifically, referring to fig. 17, the method is applied to a display device and comprises:
S201, receiving a viewpoint position;
The display device may acquire the viewpoint position in any manner commonly used in the art, for example: motion capture, eyeball tracking, myoelectric sensing, gesture tracking, direction tracking, voice interaction, sensors, and the like. The applicant does not limit the manner of acquiring the viewpoint position; any manner that can acquire the viewpoint position in practical application is applicable to the technical solutions shown in the embodiments of the present application.
S202, screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference positions and the image block sets corresponding to the reference positions; the preset list is generated by the server, and the display device requests it from the server and then stores it locally;
For the method of generating the preset list, reference may be made to the above embodiments; details are not described here again.
S203, loading the image blocks covered in the target image block set.
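The display-device flow S201 to S203 mirrors the server-side lookup but runs against the locally cached preset list. A minimal sketch reusing find_target_block_set from above; the server-interface names are hypothetical:

    class PanoramaClient:
        def __init__(self, server):
            self.server = server
            # Setup: request the preset list from the server once and
            # store it locally, as described above.
            self.preset_list = server.fetch_preset_list()

        def on_viewpoint(self, viewpoint):
            # S201: receive the viewpoint position.
            # S202: screen out the matching target image block set locally.
            _, block_set = find_target_block_set(viewpoint, self.preset_list)
            # S203: request and load the covered image blocks.
            for block in block_set:
                self.server.fetch_block(block)

Because the screening happens on the device, only the image block requests themselves travel to the server during playback.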
According to the technical solution provided by the present invention, the mapping relationship between each reference position and its corresponding image block set can be stored as a configuration file in a preset list through preprocessing; when the terminal plays a panoramic video, the image blocks to be loaded for the current frame are determined by looking up the preset list, and the corresponding data are then requested and loaded.
A third aspect of the embodiments of the present application shows a server. Specifically, referring to fig. 18, the server comprises:
a receiving unit 11 configured to receive the viewpoint position uploaded by the display device;
a screening unit 12 configured to screen out a target image block set matching the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference positions and the image block sets corresponding to the reference positions;
a sending unit 13 configured to send the image blocks covered in the target image block set to the display device.
A fourth aspect of the embodiments of the present application shows a display device. Specifically, referring to fig. 19, the display device comprises:
a receiving unit 21 configured to receive a viewpoint position;
a screening unit 22 configured to screen out a target image block set matching the viewpoint position from a preset list, wherein the target image block set is the image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference positions and the image block sets corresponding to the reference positions; the preset list is generated by the server, and the display device requests it from the server and then stores it locally;
a loading unit 23 configured to load the image blocks covered in the target image block set.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the invention being indicated by the following claims.

Claims (8)

1. An image processing method is applied to a server side and comprises the following steps:
receiving a viewpoint position uploaded by display equipment;
selecting a reference position with the shortest distance to the viewpoint position;
screening out a target image block set matched with the reference position in a preset list, wherein the target image block set is an image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference position and the image block set corresponding to the reference position; the image block set corresponding to the reference position comprises video blocks covered by an area formed by rotating a diagonal line of a rectangular area visible to a user by 360 degrees by taking the reference position point as a circle center in the panoramic video; the user visible rectangular area is determined by the viewpoint position, the rotation angle around the Z axis and the field angle of the user;
and sending the image blocks covered in the target image block set to the display device.
2. The method of claim 1, wherein the preset list is generated by:
selecting n reference positions in the panoramic image;
sequentially determining a field angle area i_a corresponding to a reference position i according to the reference position i and a user visible field angle area, wherein i = 1-n;
determining a set of image blocks covered by the field angle area i_a as an image block set i_r, wherein the image block set i_r is the image block set corresponding to the reference position i;
and storing the reference position i and the image block set i_r in a preset list.
3. The method according to claim 2, wherein the reference position is a reference position point, and the step of determining, according to the reference position i and the user visible field angle area, the field angle area i_a corresponding to the reference position i specifically comprises:
rotating a diagonal line of the rectangular region visible to the user by 360 degrees around the reference position point i as a circle center to form a region, namely the field angle region i_a corresponding to the reference position i, wherein the rectangular region visible to the user is the user visible field angle region.
4. An image processing method, applied to a server side, includes:
selecting n reference positions in the panoramic image; the reference position is a viewpoint area, and a longitude and latitude range i_oa corresponding to the projection of each viewpoint area i on the spherical surface is read respectively, wherein the longitude and latitude range i_oa comprises: a longitude range (lon1, lon2) and a latitude range (
lat1, lat2);
Determining the region surrounded by the longitude range (lon1-D/2, lon2+D/2) and the latitude range (lat1-D/2, lat2+D/2) as the view angle area i_a corresponding to the viewpoint area i, wherein D is a diagonal line of the user visible rectangular area, and the user visible rectangular area is the user visible view angle area;
determining a set of image blocks covered by the view field angle area i _ a as an image block set i _ r, wherein the image block set i _ r is an image block set corresponding to the view field area i;
storing the view area i and the image block set i _ r in a preset list;
receiving a viewpoint position uploaded by display equipment;
screening out a target image block set matched with the viewpoint position from the preset list, wherein the target image block set is an image block set corresponding to a reference position matched with the viewpoint position;
and sending the image blocks covered in the target image set to a display device.
5. The method of claim 4, wherein the step of screening out the target image block set matching the viewpoint position from the preset list comprises:
traversing the preset list, and respectively reading the longitude and latitude range i_oa of each viewpoint area i;
in response to that the longitude and latitude value corresponding to the viewpoint position falls within the longitude and latitude range i_oa, determining the viewpoint area i corresponding to the longitude and latitude range i_oa as the target viewpoint area; and the image block set corresponding to the target viewpoint area is the target image block set.
6. An image processing method is applied to a display device side and comprises the following steps:
receiving a viewpoint position;
selecting a reference position with the shortest distance to the viewpoint position;
screening out a target image block set matched with the reference position in a preset list, wherein the target image block set is an image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference position and the image block set corresponding to the reference position; the image block set corresponding to the reference position comprises video blocks covered by an area formed by rotating a diagonal line of a rectangular area visible to a user by 360 degrees by taking the reference position point as a circle center in the panoramic video; the user visible rectangular area is determined by the viewpoint position, the rotation angle around the Z axis and the field angle of the user; the preset list is generated by a server, and the display equipment requests from the server and then stores the preset list locally;
and loading the image blocks covered in the target image block set.
7. A server, comprising:
a receiving unit configured to receive a viewpoint position uploaded by a display device;
a screening unit configured to select a reference position having the shortest distance to the viewpoint position; screening out a target image block set matched with the reference position from a preset list, wherein the target image block set is an image block set corresponding to the reference position matched with the viewpoint position, and the preset list stores the reference position and the image block set corresponding to the reference position; the image block set corresponding to the reference position comprises video blocks covered by an area formed by rotating a diagonal line of a rectangular area visible to a user by 360 degrees by taking the reference position point as a circle center in the panoramic video; the user visible rectangular area is determined by the viewpoint position, the rotation angle around the Z axis and the field angle of the user;
a sending unit configured to send the image blocks covered in the target image block set to the display device.
8. A display device, comprising:
a receiving unit configured to receive a viewpoint position;
a screening unit configured to select a reference position having the shortest distance to the viewpoint position; screening out a target image block set matched with the viewpoint position from a preset list, wherein the target image block set is an image block set corresponding to a reference position matched with the viewpoint position, and the preset list stores the reference position and the image block set corresponding to the reference position; the image block set corresponding to the reference position comprises video blocks covered in the panoramic video by an area formed by rotating the diagonal line of a rectangular area visible to a user by 360 degrees around the reference position point as a circle center; the user visible rectangular area is determined by the viewpoint position, the rotation angle around the Z axis and the field angle of the user; the preset list is generated by a server, and the display equipment requests from the server and then stores the preset list locally;
a loading unit configured to load the image blocks covered in the target image block set.
CN202010095906.1A 2020-02-17 2020-02-17 Image processing method, server and display device Active CN111314739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010095906.1A CN111314739B (en) 2020-02-17 2020-02-17 Image processing method, server and display device


Publications (2)

Publication Number Publication Date
CN111314739A CN111314739A (en) 2020-06-19
CN111314739B true CN111314739B (en) 2022-05-17

Family

ID=71158325


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466176B * 2020-11-09 2024-06-11 Juhaokan Technology Co Ltd Panoramic video display method and display device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657972A * 2016-12-30 2017-05-10 Shenzhen SuperD Technology Co Ltd Video playing control method and device
CN107945231A * 2017-11-21 2018-04-20 Jiangxi Institute of Fashion Technology 3D video playback method and device
CN109327699A * 2017-07-31 2019-02-12 Huawei Technologies Co Ltd Image processing method, terminal and server
CN110611787A * 2019-06-10 2019-12-24 Qingdao Hisense Electric Co Ltd Display and image processing method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4247295B1 * 2008-01-31 2009-04-02 Toshiba Corp Broadcast receiving apparatus, broadcast receiving method and broadcast receiving system
CN102307309A * 2011-07-29 2012-01-04 Hangzhou Dianzi University Somatosensory interactive broadcasting guide system and method based on free viewpoints
WO2017134706A1 * 2016-02-03 2017-08-10 Panasonic Intellectual Property Management Co Ltd Video display method and video display device
JP6775776B2 * 2017-03-09 2020-10-28 Iwane Laboratories Ltd Free viewpoint movement display device
CN107105218B * 2017-05-05 2019-05-17 Allwinner Technology Co Ltd Field-of-view picture image generation method and device
CN109698952B * 2017-10-23 2020-09-29 Tencent Technology (Shenzhen) Co Ltd Panoramic video image playing method and device, storage medium and electronic device
CN108009588A * 2017-12-01 2018-05-08 Shenzhen Intelligent Reality Technology Co Ltd Localization method and device, mobile terminal
WO2019123547A1 * 2017-12-19 2019-06-27 Sony Interactive Entertainment Inc Image generator, reference image data generator, image generation method, and reference image data generation method
CN108833880B * 2018-04-26 2020-05-22 Peking University Method and device for predicting viewpoint and realizing optimal transmission of virtual reality video by using cross-user behavior mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant