CN109510990B - Image processing method and device, computer readable storage medium and electronic device - Google Patents

Image processing method and device, computer readable storage medium and electronic device

Info

Publication number
CN109510990B
CN109510990B (application number CN201811253125.XA)
Authority
CN
China
Prior art keywords
image
image frame
video stream
frame
processing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811253125.XA
Other languages
Chinese (zh)
Other versions
CN109510990A (en)
Inventor
解卫博
李静翔
王刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Information Technology Co Ltd
Priority to CN201811253125.XA
Publication of CN109510990A
Application granted
Publication of CN109510990B
Legal status: Active

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Abstract

The invention relates to the field of computer technology and provides an image processing method and apparatus, a computer-readable storage medium, and an electronic device. The image processing method comprises the following steps: generating a plurality of image frames through rendering by a first engine, and encoding and compressing the image frames to form a video stream; sending the video stream to a terminal device in real time; and receiving, in real time, operation information on the video stream sent by the terminal device, and processing the operation information through the first engine to control the image frames. On the one hand, the invention reduces the amount of data transmitted, reduces stuttering, and improves picture quality; on the other hand, it makes it easier for the user to operate on the picture and genuinely experience the interaction between the virtual and the real, improving the user experience.

Description

Image processing method and device, computer readable storage medium and electronic device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer-readable medium, and an electronic device.
Background
To give users an immersive experience and let them genuinely perceive objects in three-dimensional space, virtual reality technology is widely applied: computer simulation generates a three-dimensional virtual world and provides the user with simulated sensory experiences, and when the user performs an action the computer carries out complex computation to reinforce the user's sense of reality in the three-dimensional world.
Common virtual reality technology is mainly applied to games and virtual shooting. To further improve the realism users experience, a mobile device such as a mobile phone or tablet computer may be used during a game or a virtual shoot. However, producing game scenes with a game engine requires powerful CPU and GPU hardware resources, so virtual-reality interaction suffers from problems such as a large volume of data transmission, stuttering, and poor picture quality, and the user experience is poor.
Therefore, there is a need in the art for a new image processing method and apparatus.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide an image processing method and apparatus, a computer-readable medium, and an electronic device that reduce the amount of data transmitted, reduce stuttering, improve picture quality, and improve the user experience.
Additional features and advantages of the invention will be set forth in the detailed description which follows, or may be learned by practice of the invention.
According to a first aspect of the present invention, there is provided an image processing method comprising:
generating a plurality of image frames through rendering by a first engine, and encoding and compressing the image frames to form a video stream; sending the video stream to a terminal device in real time; and receiving, in real time, the operation information on the video stream sent by the terminal device, and processing the operation information through the first engine to control the image frames.
According to a second aspect of the present invention, there is provided an image processing apparatus comprising:
a video stream generating module, configured to generate a plurality of image frames through rendering by a first engine and to encode and compress the image frames to form a video stream; a sending module, configured to send the video stream to a terminal device in real time; and an interaction module, configured to receive, in real time, the operation information on the video stream sent by the terminal device and to process the operation information through the first engine to control the image frames.
In some embodiments of the present invention, based on the foregoing solution, the video stream generating module includes:
a difference pixel determination unit for determining difference pixels between other image frames and a key image frame among the image frames;
a first encoding unit, configured to encode and compress the key image frame and the difference pixels to form the video stream.
In some embodiments of the present invention, the image frame is an image frame in RGB format, and based on the foregoing scheme, the video stream generating module includes:
the format conversion unit is used for converting the RGB format image frame into a YUV format image frame so as to obtain a plurality of target image frames;
and the second coding unit is used for coding and compressing each target image frame to form the video stream.
In some embodiments of the present invention, based on the foregoing scheme, the second encoding unit includes:
the data packet generating unit is used for respectively encoding and compressing each target image frame by a preset encoding method to form an image data packet corresponding to each target image frame; a video stream generating unit configured to form the video stream from each of the image packets.
In some embodiments of the present invention, based on the foregoing solution, the packet generating unit includes:
a writing unit, configured to write, when the image data packet is formed, the sequence number corresponding to the image frame and the timestamp corresponding to generation of the image data packet.
According to a third aspect of the present invention, there is provided an image processing method comprising:
receiving a video stream, wherein the video stream comprises a plurality of image data packets, and each image data packet is formed by respectively encoding and compressing a plurality of image frames; decoding the image data packet to obtain each image frame; and receiving operation information of the user on the image frame, and sending the operation information to a server.
According to a fourth aspect of the present invention, there is provided an image processing apparatus comprising:
the receiving module is used for receiving a video stream, wherein the video stream comprises a plurality of image data packets, and each image data packet is formed by respectively encoding and compressing a plurality of image frames; a decoding module, configured to decode the image data packet to obtain each image frame; and the data acquisition module is used for receiving the operation information of the user on the image frame and sending the operation information to a server.
In some embodiments of the present invention, the image frame is an image frame in YUV format, and based on the foregoing scheme, the image processing apparatus further includes:
and the format conversion module is used for converting the image frames in the YUV format into image frames in the RGB format after each image frame is obtained.
In some embodiments of the present invention, based on the foregoing scheme, the image data packet includes a sequence number of the image frame and a timestamp corresponding to generation of the image data packet.
In some embodiments of the present invention, based on the foregoing scheme, the image processing apparatus includes:
a detection module, configured to perform frame-drop detection according to the sequence numbers of the image frames and to determine the time interval between adjacent image frames according to the timestamps.
In some embodiments of the present invention, based on the foregoing solution, the detection module includes:
a frame drop judging unit, configured to judge, according to the sequence numbers of the image frames, whether a lost image frame exists in the video stream, the lost image frame having a target sequence number;
and an image frame selecting unit, configured to, when a lost image frame is judged to exist, take the image frame corresponding to the previous sequence number adjacent to the target sequence number as the image frame corresponding to the target sequence number.
In some embodiments of the present invention, based on the foregoing scheme, the image processing apparatus includes:
and the control generating module is used for generating a plurality of functional controls based on the second engine and controlling the image frames through the functional controls.
According to a fifth aspect of the present invention, there is provided a computer readable medium, on which a computer program is stored, which program, when executed by a processor, implements the image processing method as described in the above embodiments.
According to a sixth aspect of the present invention, there is provided an electronic apparatus comprising: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described in the above embodiments.
According to the image processing method in the present exemplary embodiment, a plurality of image frames are first generated by rendering in a first engine, and the image frames are encoded and compressed to form a video stream; the video stream is then sent to the terminal device in real time and displayed on its display screen; finally, the user's operation information on the video stream is obtained and processed through the first engine to control the image frames. On the one hand, the invention reduces the amount of data transmitted, reduces stuttering, and improves picture quality; on the other hand, it makes it easier for the user to operate on the picture and genuinely experience the interaction between the virtual and the real, improving the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or an image processing apparatus to which an embodiment of the present invention can be applied;
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device to implement an embodiment of the invention;
FIG. 3 shows a flow diagram of an image processing method in an embodiment of the invention;
FIG. 4 is a flow diagram illustrating encoding of compressed image frames into a video stream in one embodiment of the invention;
fig. 5 is a schematic structural diagram illustrating a YUV420sp storage format according to an embodiment of the present invention;
FIG. 6 shows a flow diagram of an image processing method in an embodiment of the invention;
FIG. 7 is a flow chart illustrating the detection of dropped frames according to an embodiment of the present invention;
FIG. 8 illustrates an image displayed in a mobile terminal in one embodiment of the invention;
FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are given to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities.
I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, a game engine is used to make games or virtual shoots, but running the engine usually requires substantial CPU and GPU hardware resources. In virtual shooting or film and television production, the preview stage is critical: to truly experience the effect rendered by the engine, a mobile device such as a mobile phone or tablet computer can be used during shooting or production, and through a plug-in in the game engine the user can see the engine-rendered picture on the mobile terminal and have the user's operations transmitted back.
However, the plug-in in the related art compresses and transmits each frame independently, without considering continuity between pictures, so the data transmission volume is large and obvious stuttering appears even in a test local-area-network environment. In addition, the related art handles phenomena such as motion blur poorly, picture quality at high resolution is bad, and raising the resolution to improve picture quality harms the fluency of network transmission. Finally, the related art provides only an installation package for the mobile device, which makes it inconvenient for users to add more user-interface controls (UI controls) on the mobile side.
In view of the problems in the related art, the present invention provides an image processing method and an image processing apparatus.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the image processing method or the image processing apparatus of the embodiment of the present invention can be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 103 may be a server cluster composed of a plurality of servers.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or transmit data or the like. The terminal device 101 may be various electronic devices having a hard disk, including but not limited to a tablet computer, a smart phone, a portable computer, a desktop computer, and the like.
The server 103 may be a server that provides various services. A first engine is loaded on the server 103; a plurality of image frames can be generated by rendering through the first engine, an encoding/compression program in the server 103 then encodes and compresses the image frames to generate corresponding image data packets, a video stream is formed from the image data packets, and the video stream is sent to the terminal device 101 in real time. After receiving the video stream, the terminal device 101 decodes it to obtain display images and presents them on its display screen; the user can operate on the displayed image through a user-interface control button, through an external device such as a mouse, keyboard, or touch-screen pen, or with a finger at any position on the display screen, and the terminal device 101 collects the user's operation information on the displayed image and sends it to the server 103. After receiving the user's operation information, the server 103 processes it through the first engine to control the image frames.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device to implement embodiments of the present invention. The electronic device can execute the image processing method in the above embodiment of the present invention to realize the control of image quality; the computer system shown in fig. 2 may be applied to the aforementioned terminal apparatus 101 and/or server 103.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiment of the present invention.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. The RAM 203 also stores various programs and data necessary for system operation. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output section 207 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card or a modem. The communication section 209 performs communication processing via a network such as the Internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 210 as necessary, so that a computer program read out from it is installed into the storage section 208 as needed.
In particular, according to an embodiment of the present invention, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The computer program executes various functions defined in the system of the present invention when executed by a Central Processing Unit (CPU) 201.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not in any way limit the units themselves.
As another aspect, the present invention also provides a computer-readable storage medium, which may be contained in the terminal device or the server described in the above embodiments; or may exist separately without being assembled into the terminal device or the server. The above-mentioned computer-readable storage medium carries one or more programs which, when executed by a terminal device or a server, cause the terminal device or the server to implement a method as described in the embodiments described below. For example, a terminal device or server may implement the various steps shown in fig. 3-8.
In an embodiment of the present invention, an image processing method is first provided to address the problems described above. Referring to fig. 3, the image processing method comprises at least the following steps:
step S310: generating a plurality of image frames through rendering of a first engine, and encoding and compressing the image frames to form a video stream;
step S320: sending the video stream to terminal equipment in real time;
step S330: and receiving the operation information of the video stream sent by the terminal equipment in real time, and processing the operation information through the first engine to realize the control of the image frame.
According to the image processing method in this exemplary embodiment, a plurality of image frames generated by rendering through a first engine are encoded and compressed to form a video stream, and the video stream is sent to a terminal device in real time; the operation information on the video stream sent by the terminal device is then received and processed through the first engine to control the image frames. On the one hand, the invention reduces the amount of data transmitted and reduces stuttering; on the other hand, it improves the handling of motion blur and the picture quality, further improving the user experience.
Next, the image processing method in the present exemplary embodiment is further explained.
In step S310, a plurality of image frames are generated through first engine rendering, and each of the image frames is encoded and compressed to form a video stream.
In this exemplary embodiment, the first engine may be a game engine. A game engine is the core component of an editable computer game system or of an interactive real-time graphics application; the game engine used in the present invention may be Unreal, Ogre, Unity, Gamebryo, and the like. A plurality of image frames may be generated by rendering through the first engine, and once the image frames are obtained they may be encoded and compressed to form a video stream.
In the present exemplary embodiment, preset video encoding software may be used to encode and compress the image frames into a video stream, for example the ffmpeg codec software, which is free software that records, converts, and streams audio and video in many formats and includes audio/video decoding/compression libraries. Specifically, the image frames may be encoded and compressed into a video stream by a video compression algorithm in the video encoding software, for example H.264 as implemented by the x264 encoder; H.264 is a digital video compression format and video codec standard, and x264 is a software encoder for it.
In the present exemplary embodiment, fig. 4 shows a flowchart of encoding and compressing image frames to form a video stream. As shown in fig. 4, in step S401, difference pixels between the other image frames and a key image frame among the plurality of image frames are determined; in step S402, the key image frame and the difference pixels are encoded and compressed to form the video stream. Note that the key image frame may be obtained first, and the difference pixels between the other image frames and the key image frame may then be determined during encoding and compression. Because identical pixels are compressed only once in this process, the amount of data to compress is reduced: compared with the prior art, in which each frame is compressed independently, compressing the image frames into a video stream greatly reduces the data transmission volume, saves bandwidth, and further reduces stuttering, as the sketch below illustrates.
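A minimal C++ sketch of the difference-pixel idea in steps S401 and S402, under assumptions the patent does not fix (a raw interleaved frame buffer and a byte-level comparison). Real H.264/x264 encoders use block-based, motion-compensated prediction rather than this naive diff; the sketch only illustrates why pixels identical to the key frame need not be retransmitted.

#include <cstddef>
#include <cstdint>
#include <vector>

// A changed byte in the current frame, relative to the key image frame.
struct DiffPixel {
    uint32_t offset;  // position of the changed byte in the frame buffer
    uint8_t  value;   // new value at that position
};

// Compare a frame against the key frame and keep only the differences;
// the key frame plus these diffs are what get encoded and compressed.
std::vector<DiffPixel> DiffAgainstKeyFrame(const uint8_t* keyFrame,
                                           const uint8_t* frame,
                                           size_t frameBytes) {
    std::vector<DiffPixel> diffs;
    for (size_t i = 0; i < frameBytes; ++i) {
        if (frame[i] != keyFrame[i]) {
            diffs.push_back({static_cast<uint32_t>(i), frame[i]});
        }
    }
    return diffs;
}

For a mostly static rendered scene, the diff list is far smaller than the frame itself, which is the intuition behind the bandwidth saving described above.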
In step S320, the video stream is sent to the terminal device in real time.
In the present exemplary embodiment, after the video stream is formed it may be sent to the terminal device 101 in real time, and the image frames in the video stream are displayed on the display screen of the terminal device 101 for the user to view and operate on. The terminal device may be a mobile terminal device such as a mobile phone or tablet computer, or another commonly used terminal device. The server 103 can send the video stream, formed by compressing the picture frames rendered by the first engine, to the mobile terminal device over a wireless network, and the mobile terminal device can likewise send the user's operation information on the picture frames back to the server over the wireless network for processing by the first engine, thereby realizing interaction between the virtual and the real.
In the present exemplary embodiment, after the video stream is formed it may be sent to the terminal device 101 according to a preset transmission protocol. The transmission protocol may be a real-time data transmission protocol such as UDP (User Datagram Protocol), a connectionless protocol for handling data packets; the present invention places no particular limitation on the type of data transmission protocol. As a specific embodiment of the present invention, the video stream may be sent to the terminal device 101 over UDP, as sketched below.
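As an illustration of the real-time transmission step, a hedged sketch of sending one encoded image data packet over UDP with POSIX sockets; the terminal address and port are placeholders, and the patent does not mandate any particular socket API.

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

// Send one image data packet to the terminal device. UDP is
// connectionless, so every sendto() is independent: packets may be
// lost or reordered, which is why sequence numbers are added later.
bool SendPacketUdp(const uint8_t* data, size_t len,
                   const char* terminalIp, uint16_t port) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    inet_pton(AF_INET, terminalIp, &dst.sin_addr);

    ssize_t sent = sendto(sock, data, len, 0,
                          reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
    close(sock);
    return sent == static_cast<ssize_t>(len);
}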
In step S330, the operation information on the video stream sent by the terminal device is received in real time, and the operation information is processed through the first engine to control the image frames.
In the present exemplary embodiment, the video stream is sent from the server 103 to the terminal device 101, and the user can view the image frames in the video stream on the display screen of the terminal device 101 and operate on them. An operation on an image frame may trigger a corresponding event by clicking a button graphic located in the frame, or by clicking any position of the frame. The server 103 receives the operation information on the video stream sent by the terminal device 101 and processes it through the first engine to control the image frames. For example, the operation information input by the user may be a single click, a double click, and so on. For a single click, a control in the terminal device 101 can resolve the specific event and position of the click and form an operation information data packet, which is sent from the terminal device 101 to the server 103; the first engine loaded in the server 103 can then determine the corresponding position in the rendered image frame from the position in the packet and perform the corresponding action for the specific event, for example scaling the image or switching the image frame, and finally return the processed image frame to the terminal device 101. For a double click, the first engine may perform the corresponding double-click action, for example increasing image brightness or resolution at the clicked position. Of course, the operation information may also be other operations, and the invention is not limited in this respect.
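For concreteness, a hedged sketch of what the operation information data packet and its engine-side dispatch could look like; the event values, field layout, and the actions taken are illustrative assumptions, since the patent describes the behavior (an event type plus a screen position, dispatched by the first engine) without fixing a format.

#include <cstdint>

// Hypothetical events carried by an operation information data packet.
enum class OpEvent : uint8_t { SingleClick = 1, DoubleClick = 2 };

// Assumed packet layout: the terminal reports the event and where on
// the displayed image frame it happened.
struct OperationPacket {
    OpEvent  event;
    uint16_t x;  // position on the terminal display
    uint16_t y;
};

// Server-side dispatch by the first engine, mirroring the examples in
// the text; the actions themselves are placeholders.
void HandleOperation(const OperationPacket& op) {
    switch (op.event) {
        case OpEvent::SingleClick:
            // e.g. trigger the control under (x, y), scale or switch the frame
            break;
        case OpEvent::DoubleClick:
            // e.g. raise brightness or resolution around (x, y)
            break;
    }
}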
In this exemplary embodiment, when preset video encoding software is used to encode and compress the image frames into a video stream, the preset parameters of the video encoding software may be adjusted to guarantee video quality, speed up encoding and compression, and reduce the latency of the video stream. These parameters may include the bit rate; the width and height of the picture at encoding time; the video frame rate; how many frames are needed to recover a normal picture after a frame drop; the number of frames of delay between encoder input and decoder output; the number of encoding threads; the encoding format; and the compression algorithm, as illustrated in the sketch below.
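For illustration, a sketch of how these parameters map onto FFmpeg's libavcodec with the x264 backend. The concrete values (bit rate, 30 fps, GOP length, thread count, low-latency preset) are assumptions; the paragraph names the knobs without fixing values.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

AVCodecContext* MakeEncoder(int width, int height) {
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);

    ctx->bit_rate     = 2000000;              // target bit rate
    ctx->width        = width;                // picture width at encode time
    ctx->height       = height;               // picture height
    ctx->time_base    = AVRational{1, 30};    // 30 fps video frequency
    ctx->framerate    = AVRational{30, 1};
    ctx->gop_size     = 30;                   // frames until the next key frame,
                                              // i.e. recovery interval after a drop
    ctx->max_b_frames = 0;                    // no B-frames: fewer delayed frames
    ctx->thread_count = 4;                    // encoding threads
    ctx->pix_fmt      = AV_PIX_FMT_YUV420P;   // encoding pixel format

    // x264-specific options commonly used for low-latency streaming
    av_opt_set(ctx->priv_data, "preset", "ultrafast", 0);
    av_opt_set(ctx->priv_data, "tune",   "zerolatency", 0);

    avcodec_open2(ctx, codec, nullptr);
    return ctx;
}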
Further, the encoding and compression of the image frames may run in a single thread or in multiple threads, and encoding threads may be created in the first engine using the FAsyncTask and FNonAbandonableTask classes. In the present invention an asynchronous mode can be adopted, so that the compression-encoding thread is independent of the main thread; an independent encoding thread does not block the main thread, which improves data transmission efficiency.
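A sketch of such an asynchronous encoding task using Unreal Engine's FAsyncTask and FNonAbandonableTask classes, as the paragraph describes; EncodeFrame is a hypothetical placeholder for the actual compression call.

#include "Async/AsyncWork.h"

// The encode work runs on a pool thread, so the game (main) thread
// that produced the rendered frame is never blocked.
class FEncodeCompressTask : public FNonAbandonableTask
{
    friend class FAsyncTask<FEncodeCompressTask>;

    TArray<uint8> FrameData;

    explicit FEncodeCompressTask(TArray<uint8> InFrameData)
        : FrameData(MoveTemp(InFrameData)) {}

    void DoWork()
    {
        // EncodeFrame(FrameData);  // compress and hand off to the sender
    }

    FORCEINLINE TStatId GetStatId() const
    {
        RETURN_QUICK_DECLARE_CYCLE_STAT(FEncodeCompressTask,
                                        STATGROUP_ThreadPoolAsyncTasks);
    }
};

// Usage, fired from the render/capture path:
// (new FAsyncTask<FEncodeCompressTask>(MoveTemp(Pixels)))->StartBackgroundTask();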
In this exemplary embodiment, the image frames generated by rendering through the first engine are in RGB format. In the process of encoding and compressing the image frames to form a video stream, the RGB image frames may first be converted into YUV image frames, the YUV image frames are taken as target image frames, and the target image frames are then encoded and compressed to form the video stream. YUV is a pixel format in which the luminance parameter and the chrominance parameters are expressed separately; separating them not only avoids mutual interference but also allows the chrominance sampling rate to be reduced without greatly affecting image quality. Because YUV does not require three independent video signals to be transmitted simultaneously as RGB does, transmitting image frames converted to YUV format occupies very little bandwidth and saves substantial resources. YUV formats fall into two categories, planar and packed: in a planar YUV format, the luminance of all pixels is stored contiguously, followed by the U components of all pixels and then the V components of all pixels; in a packed YUV format, the luminance and chrominance of each pixel are stored contiguously and interleaved. YUV image frames can be stored in formats such as YUV422P and YUV420sp. Fig. 5 shows the structure of the YUV420sp storage format: the luminance (Y) information is stored by itself, and the two chrominance components (U and V) are stored interleaved as UVUV, as the conversion sketch below also shows.
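A sketch of the RGB-to-YUV420sp conversion just described, using the common integer BT.601 coefficients; the buffer layouts (interleaved RGB input, one Y plane plus one interleaved UV plane as in fig. 5) are assumptions for illustration.

#include <cstdint>

void RgbToYuv420sp(const uint8_t* rgb, int width, int height,
                   uint8_t* yPlane, uint8_t* uvPlane) {
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            const uint8_t* p = rgb + 3 * (j * width + i);
            int r = p[0], g = p[1], b = p[2];

            // Full-resolution luminance plane
            int y = (66 * r + 129 * g + 25 * b + 128) / 256 + 16;
            yPlane[j * width + i] = static_cast<uint8_t>(y);

            // Chrominance subsampled 2x2 and interleaved as UVUV...
            if ((j % 2 == 0) && (i % 2 == 0)) {
                int u = (-38 * r - 74 * g + 112 * b + 128) / 256 + 128;
                int v = (112 * r - 94 * g - 18 * b + 128) / 256 + 128;
                uint8_t* uv = uvPlane + (j / 2) * width + i;
                uv[0] = static_cast<uint8_t>(u);
                uv[1] = static_cast<uint8_t>(v);
            }
        }
    }
}

Because U and V are stored for only one pixel in four, the converted frame takes 1.5 bytes per pixel instead of 3, which is the bandwidth saving the paragraph refers to.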
In the present exemplary embodiment, when the image frames are encoded and compressed to form a video stream, one corresponding image data packet is generated for each image frame, and the video stream is then formed from the image data packets. That is, when the video stream is transmitted under the UDP transport protocol, what is transmitted is essentially a sequence of UDP packets.
Further, when an image data packet is formed, the sequence number corresponding to the image frame and the timestamp corresponding to generation of the image data packet may be written into the packet. Specifically, the image frames may be numbered as they are generated, so that each image frame has a corresponding sequence number; the timestamp is then obtained when the image data packet for the frame is generated, and the frame's sequence number and the packet's timestamp are added to the data of the image data packet. The time interval between adjacent image frames can be determined from the timestamps of the image data packets, and frame drops can be detected from the sequence numbers of the image frames.
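A sketch of the per-packet metadata this implies. The patent specifies the fields (the frame's sequence number and the packet-generation timestamp) but no wire layout, so this fixed header is an assumption.

#include <cstdint>

#pragma pack(push, 1)
struct ImagePacketHeader {
    uint32_t sequenceNumber;  // monotonically increasing per image frame
    uint64_t timestampUs;     // packet-generation time, e.g. in microseconds
    uint32_t payloadBytes;    // length of the encoded frame data that follows
};
#pragma pack(pop)

// The receiver uses gaps in sequenceNumber for frame-drop detection and
// differences between consecutive timestamps for the inter-frame interval.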
In an embodiment of the present invention, a further image processing method is provided to address the problems described above. Referring to fig. 6, the image processing method comprises at least the following steps:
step S610: receiving a video stream, wherein the video stream comprises a plurality of image data packets, and each image data packet is formed by respectively encoding and compressing a plurality of image frames.
In the present exemplary embodiment, the first engine loaded in the server 103 renders and generates a plurality of image frames; image data packets corresponding to the image frames are generated by encoding and compressing the frames, either directly or after image-format conversion, and a video stream is formed from the image data packets. To enable the user to view and operate on the image frames generated by the first engine's rendering, the video stream is sent from the server 103 to the terminal device 101.
Step S620: decoding the image data packet to obtain each image frame;
in the present exemplary embodiment, after the terminal device 101 receives the video stream, the image data packets in the video stream may be decoded to obtain image frames, and the image frames may be presented on the display screen of the terminal device 101 for the user to view and perform operations. The decoding operation is the inverse operation of the encoding compression process in step S610, for example, when the image frame is compressed by using the X264 video encoding algorithm to form the image data packet, and the video stream is formed according to the image data packet, then the image data packet can be decoded by using the X264 video decoding algorithm to obtain the image frame accordingly.
In the present exemplary embodiment, the image frame obtained after decoding the image data packet may be an image frame in YUV format, and in order to present the image frame on the display screen of the terminal device 101 for the user to view, the image format of the image frame may be converted, for example, the image frame in YUV format is converted into the image frame in RGB format.
Step S630: receiving the user's operation information on the image frames, and sending the operation information to a server.
In the present exemplary embodiment, after the terminal device 101 decodes the video stream to obtain the image frames, the image frames may be presented on the display screen of the terminal device 101, and the user can operate on them once they are visible. The user may input operation information through an external input device connected to the terminal device 101, such as a keyboard, mouse, or touch-screen pen, or by touching with a finger; the operation information may be any position information or action information, for example a touch position, a single click, or a double click, and the invention is not limited in this respect. After collecting the user's input information, the terminal device 101 sends it to the server 103 so that the first engine can process it and control the image frames, thereby realizing interaction between the virtual and the real.
In the present exemplary embodiment, during the process of encoding and compressing the image frames to form the image data packets, the sequence numbers corresponding to the image frames and the timestamps for generating the image data packets may be written, wherein the sequence numbers corresponding to the image frames are used for performing frame drop detection, and the timestamps are used for determining the time intervals between the adjacent image frames.
Further, fig. 7 shows a flowchart of frame-drop detection. As shown in fig. 7, in step S701, whether a lost image frame exists in the video stream is determined according to the sequence numbers of the image frames, the lost image frame having a target sequence number. Because sequence numbers are consecutive, whether an image frame has been lost can be judged from whether the sequence numbers are consecutive, yielding the target sequence number of the lost frame. In step S702, if a lost frame is determined to exist, the image frame corresponding to the adjacent previous sequence number is used as the image frame for the target sequence number. A lost image frame would cause stuttering and visual noise when the video is displayed on the terminal device 101; substituting the frame of the adjacent previous sequence number reduces these artifacts, as sketched below.
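A sketch of the recovery in steps S701 and S702: a gap in the sequence numbers marks each lost frame, and the frame of the adjacent previous sequence number stands in for it. The container and delivery interface are assumptions for illustration.

#include <cstdint>
#include <map>
#include <vector>

using Frame = std::vector<uint8_t>;

// Deliver a decoded frame; if sequence numbers skipped, substitute the
// previous frame for every missing target sequence number (step S702)
// so playback neither stalls nor shows corrupted pictures.
void DeliverWithDropRecovery(std::map<uint32_t, Frame>& delivered,
                             uint32_t seq, Frame frame,
                             uint32_t& lastSeq) {
    for (uint32_t missing = lastSeq + 1; missing < seq; ++missing) {
        delivered[missing] = delivered[lastSeq];  // reuse the previous frame
    }
    delivered[seq] = std::move(frame);
    lastSeq = seq;
}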
In this exemplary embodiment, a second engine may be loaded in the terminal device 101. The second engine lets the user add a number of function controls to the interface of the terminal device 101, such as an exit button and information display boxes (frame-rate display, battery display, frame-drop prompt), making it convenient to control the image frames through these controls. Fig. 8 shows an image displayed in the mobile terminal; as shown in fig. 8, the image displayed in the mobile terminal is consistent with the image rendered at the server, and clicking a function button in the mobile terminal transmits the click event to the server so that the first engine processes the image frame according to the click event, realizing real-virtual interaction. As a specific embodiment, Cocos2d can be used as the second engine: Cocos2d has a simple structure, supports multiple languages and platforms, and has a small installation package, and it also makes it convenient for the user to add function controls and to transmit the user's operation information on the image frames to the first engine.
In the present exemplary embodiment, to increase decoding speed, reduce power consumption, and match the language environment (C++) of Cocos2d, a new thread may be created in Cocos2d and decoding performed by the terminal device 101's own decoder; for an Android system, for example, decoding may be performed by the system's native codec. After decoding, a custom shader may be written for the sprite control (see the sketch following this paragraph), and the YUV image frame is converted to an RGB image frame by the graphics processor (GPU) of the terminal device 101 and presented on its display screen. In tests on a Samsung Note 5 phone with the resolution set to 960 × 540, the frame rate stabilizes at 30 fps with CPU occupancy around 15%, which supports continuous use for more than 3 hours; on mobile devices equipped with a stronger GPU, picture quality can be improved further.
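A sketch of such a custom sprite shader: a GLSL fragment shader embedded as a C++ string, in the style Cocos2d-x uses, that samples the Y plane and the interleaved UV plane and converts to RGB on the GPU with the usual BT.601 constants. The uniform and texture names, and the use of a luminance-alpha texture for the UV plane, are assumptions.

// YUV420sp -> RGB on the GPU: Y in one texture, interleaved UV in a
// second (luminance-alpha) texture, so the CPU never touches pixels.
static const char* kYuvToRgbFrag = R"(
varying vec2 v_texCoord;
uniform sampler2D u_texY;   // luminance plane
uniform sampler2D u_texUV;  // interleaved UV plane (YUV420sp)

void main() {
    float y  = texture2D(u_texY,  v_texCoord).r;
    vec2  uv = texture2D(u_texUV, v_texCoord).ra - vec2(0.5, 0.5);

    gl_FragColor = vec4(y + 1.402 * uv.y,                  // R
                        y - 0.344 * uv.x - 0.714 * uv.y,   // G
                        y + 1.772 * uv.x,                  // B
                        1.0);
}
)";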
In the image processing method described above, after receiving the video stream the terminal device decodes it and converts the image format to obtain the image frames rendered by the first engine. The user can view the image frames on the terminal device, in particular a mobile terminal device, and input operation information so that the server processes the image frames accordingly, makes them change in response, and returns the processed image frames to the user, realizing interaction between the virtual and the real. In addition, by loading the second engine in the terminal device, the user can customize function buttons for debugging.
The image processing method of the present invention is illustrated below with remote interaction during game production. First, a game virtual engine mounted in a server renders a plurality of game image frames according to the operations of a game developer. Since the rendered game image frames are usually in RGB format, and a YUV image separates the luminance parameter from the chrominance parameters, the image format of the game image frames can be converted to YUV to improve compression quality. The YUV game image frames are then compression-encoded by video encoding software to generate a plurality of image data packets, forming a video stream, where each image data packet contains the sequence number of its image frame and the timestamp at which the packet was generated. The video stream is then sent remotely to a mobile terminal device under a transmission protocol so that the user can remotely control the game image frames through the mobile terminal device. After the mobile terminal device receives the image data packets of the video stream in order, it first decodes each packet through the video encoding software to obtain the game image frame it carries, the decoding operation being the inverse of the compression encoding used to generate the packet; it then converts the image format of the game image frame from YUV to RGB and displays it on the device's screen so that the user can watch the game picture. Furthermore, a number of control buttons can be placed in the user interface through another virtual engine mounted in the mobile terminal device, so that the user can process the game image frames through touch-screen operations or click the control buttons to trigger corresponding events on them; after the mobile terminal device receives the user's operation data, it sends the data to the server so that the game virtual engine in the server processes the game image frames accordingly, realizing the user's remote control over the picture. In this embodiment, the rendered image frames are compressed in real time into a high-quality video stream and transmitted to the mobile terminal device through the transmission protocol, so that the user of the mobile device sees a high-quality picture with little loss in real time, further improving the user experience.
Embodiments of the apparatus of the present invention will be described below, which can be used to perform the above-mentioned image processing method of the present invention. For details that are not disclosed in the embodiments of the present invention, refer to the embodiments of the image processing method of the present invention.
Fig. 9 shows a schematic configuration diagram of an image processing apparatus, and referring to fig. 9, the image processing apparatus 900 may include: a video stream generating module 901, a sending module 902 and an interacting module 903.
Specifically, the video stream generating module 901 is configured to generate a plurality of image frames through rendering by a first engine, and encode and compress each of the image frames to form a video stream; a sending module 902, configured to send the video stream to a terminal device in real time; and the interaction module 903 is configured to receive operation information of the video stream sent by the terminal device in real time, and process the operation information through the first engine to implement control over the image frame.
In the present exemplary embodiment, the video stream generation module 901 includes a difference pixel determination unit and a first encoding unit.
Specifically, a difference pixel determination unit for determining difference pixels between other image frames of the plurality of image frames and a key image frame; a first encoding unit, configured to encode and compress the key image frame and the difference pixels to form the video stream.
In this example embodiment, the image frames are RGB image frames, and the video stream generating module 901 includes a format converting unit, configured to convert the RGB image frames into YUV image frames to obtain a plurality of target image frames; and the second coding unit is used for coding and compressing each target image frame to form the video stream.
In the present exemplary embodiment, the second encoding unit includes a packet generation unit and a video stream generation unit.
Specifically, the data packet generating unit is configured to encode and compress each target image frame by a preset encoding method to form an image data packet corresponding to each target image frame; a video stream generating unit configured to form the video stream from each of the image packets.
In the present exemplary embodiment, the data packet generation unit includes a writing unit configured to write, when the image data packet is formed, a sequence number corresponding to the image frame and a timestamp corresponding to generation of the image data packet.
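One possible byte layout for such an image data packet is sketched below; the header fields and their ordering are hypothetical, as the method only requires that the sequence number and generation timestamp be written into the packet.

import struct
import time

# Hypothetical fixed header: frame sequence number, generation timestamp
# in seconds, and the length of the compressed payload that follows.
HEADER = struct.Struct("!IdI")

def make_packet(seq: int, payload: bytes) -> bytes:
    return HEADER.pack(seq, time.time(), len(payload)) + payload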
Fig. 10 shows a schematic configuration diagram of an image processing apparatus, and referring to fig. 10, an image processing apparatus 1000 may include: a receiving module 1001, a decoding module 1002 and a data acquisition module 1003.
Specifically, the receiving module 1001 is configured to receive a video stream, where the video stream includes a plurality of image data packets formed by respectively encoding and compressing a plurality of image frames; the decoding module 1002 is configured to decode the image data packets to obtain each image frame; and the data obtaining module 1003 is configured to receive operation information of a user on the image frames and send the operation information to a server.
In the present exemplary embodiment, the image frames are image frames in YUV format, fig. 11 shows an image processing apparatus, and as shown in fig. 11, the image processing apparatus 1000 may further include a format conversion module 1004, configured to convert the image frames in YUV format into image frames in RGB format after obtaining each of the image frames.
In this example embodiment, the image data packet includes a sequence number of the image frame and a timestamp corresponding to generation of the image data packet.
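On the receiving side, the same hypothetical header layout sketched earlier can be unpacked before decoding, recovering the sequence number and timestamp used by the detection module described next.

import struct

HEADER = struct.Struct("!IdI")  # sequence number, timestamp, payload length

def parse_packet(packet: bytes):
    seq, timestamp, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return seq, timestamp, payload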
Fig. 12 shows an image processing apparatus, and as shown in fig. 12, the image processing apparatus 1000 may further include a detection module 1005, configured to perform frame drop detection according to the sequence numbers of the image frames, and determine a time interval between adjacent image frames according to the timestamps.
In the present exemplary embodiment, the detection module 1005 includes a dropped frame determination unit and an image frame selection unit.
Specifically, the frame dropping judgment unit is configured to judge, according to the sequence numbers of the image frames, whether a lost image frame exists in the video stream, the lost image frame having a target sequence number; the image frame selection unit is configured to, when it is judged that a lost image frame exists, take the image frame corresponding to the previous sequence number adjacent to the target sequence number as the image frame corresponding to the target sequence number.
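A minimal sketch of this dropped-frame handling follows, assuming decoded frames are buffered in a dictionary keyed by sequence number; the buffering scheme itself is an illustrative assumption.

def fill_dropped_frames(frames: dict, first_seq: int, last_seq: int) -> list:
    # Walk the expected sequence numbers; where a frame is missing,
    # reuse the frame with the nearest preceding sequence number.
    filled, previous = [], None
    for seq in range(first_seq, last_seq + 1):
        if seq in frames:
            previous = frames[seq]
        filled.append(previous)  # None only if the very first frame is lost
    return filled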
Fig. 12 shows an image processing apparatus, and as shown in fig. 12, the image processing apparatus 1000 may further include a control generating module 1006, configured to generate a plurality of functional controls based on the second engine, and control the image frame through the functional controls.
Since each functional module of the image processing apparatus according to the exemplary embodiment of the present invention corresponds to the step of the exemplary embodiment of the image processing method described above, the description thereof is omitted here.
It should be noted that although several modules or units of the image processing apparatus are mentioned in the above detailed description, this division is not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is only limited by the appended claims.

Claims (15)

1. An image processing method, comprising:
generating a plurality of image frames through rendering of a first engine, and encoding and compressing the image frames to form a video stream;
sending the video stream to terminal equipment in real time;
and receiving operation information, sent by the terminal device in real time, of a user on an image frame in the video stream, and processing the operation information through the first engine to realize control over the image frame, so that the image frame undergoes any one of switching, scaling, brightness adjustment, or resolution adjustment.
2. The image processing method of claim 1, wherein generating a plurality of image frames by a first engine rendering, and encoding and compressing each of the image frames to form a video stream comprises:
determining difference pixels between other image frames and a key image frame in each image frame;
the key image frame and the difference pixels are code compressed to form the video stream.
3. The image processing method according to claim 1, wherein the image frame is an image frame in RGB format, a plurality of image frames are generated by rendering in the first engine, and each image frame is encoded and compressed to form a video stream, and the method comprises:
converting the image frame in the RGB format into an image frame in a YUV format to obtain a plurality of target image frames;
and carrying out coding compression on each target image frame to form the video stream.
4. The image processing method according to claim 3, wherein encoding and compressing each of the target image frames to form the video stream comprises:
respectively carrying out coding compression on each target image frame by a preset coding method to form an image data packet corresponding to each target image frame;
the video stream is formed from each of the image data packets.
5. The image processing method according to claim 4, wherein the encoding and compressing each of the target image frames by a preset encoding method to form an image data packet corresponding to each of the target image frames comprises:
and when the image data packet is formed, writing into the image data packet a sequence number corresponding to the image frame and a timestamp corresponding to generation of the image data packet.
6. An image processing method, comprising:
receiving a video stream, wherein the video stream comprises a plurality of image data packets formed by respectively encoding and compressing a plurality of image frames;
decoding the image data packet to obtain each image frame;
receiving operation information of a user on the image frame, and sending the operation information to a server, so that the server processes the operation information through a first engine to realize control over the image frame, whereby the image frame undergoes any one of switching, scaling, brightness adjustment, or resolution adjustment.
7. The image processing method of claim 6, wherein the image frame is an image frame in YUV format, the method further comprising:
and after each image frame is obtained, converting the image frame in the YUV format into an image frame in an RGB format.
8. The image processing method of claim 6, wherein the image data packet comprises a sequence number of the image frame and a timestamp corresponding to generation of the image data packet.
9. The image processing method according to claim 8, further comprising:
and performing frame drop detection according to the sequence number of the image frame, and determining the time interval between the adjacent image frames according to the time stamp.
10. The image processing method according to claim 9, wherein performing frame drop detection according to the sequence number of the image frame comprises:
judging whether a lost image frame exists in the video stream according to the sequence number of the image frame, wherein the lost image frame has a target sequence number;
and if so, taking the image frame corresponding to the previous serial number adjacent to the target serial number as the image frame corresponding to the target serial number.
11. The image processing method according to claim 6, characterized in that the method further comprises:
and generating a plurality of function controls based on a second engine, and controlling the image frames through the function controls.
12. An image processing apparatus characterized by comprising:
the video stream generation module is used for generating a plurality of image frames through rendering of a first engine and coding and compressing the image frames to form a video stream;
the sending module is used for sending the video stream to terminal equipment in real time;
and the interaction module is used for receiving operation information, sent by the terminal device in real time, of a user on an image frame in the video stream, and processing the operation information through the first engine to realize control over the image frame, so that the image frame undergoes any one of switching, scaling, brightness adjustment, or resolution adjustment.
13. An image processing apparatus characterized by comprising:
the receiving module is used for receiving a video stream, wherein the video stream comprises a plurality of image data packets formed by respectively encoding and compressing a plurality of image frames;
a decoding module, configured to decode the image data packet to obtain each image frame;
the data acquisition module is used for receiving operation information of a user on the image frame and sending the operation information to a server, so that the server processes the operation information through a first engine to realize control over the image frame, whereby the image frame undergoes any one of switching, scaling, brightness adjustment, or resolution adjustment.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 11.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image processing method according to any one of claims 1 to 11.
CN201811253125.XA 2018-10-25 2018-10-25 Image processing method and device, computer readable storage medium and electronic device Active CN109510990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811253125.XA CN109510990B (en) 2018-10-25 2018-10-25 Image processing method and device, computer readable storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN109510990A CN109510990A (en) 2019-03-22
CN109510990B (en) 2022-03-29

Family

ID=65746070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811253125.XA Active CN109510990B (en) 2018-10-25 2018-10-25 Image processing method and device, computer readable storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN109510990B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490962B (en) * 2019-08-20 2023-09-15 武汉邦拓信息科技有限公司 Remote rendering method based on video stream
CN110634564B (en) * 2019-09-16 2023-01-06 腾讯科技(深圳)有限公司 Pathological information processing method, device and system, electronic equipment and storage medium
CN110827380B (en) * 2019-09-19 2023-10-17 北京铂石空间科技有限公司 Image rendering method and device, electronic equipment and computer readable medium
CN110865787A (en) * 2019-11-25 2020-03-06 京东方科技集团股份有限公司 Image processing method, server, client and image processing system
CN111836116A (en) * 2020-08-06 2020-10-27 武汉大势智慧科技有限公司 Network self-adaptive rendering video display method and system
CN112546633A (en) * 2020-12-10 2021-03-26 网易(杭州)网络有限公司 Virtual scene processing method, device, equipment and storage medium
CN114897758A (en) * 2021-01-26 2022-08-12 腾讯科技(深圳)有限公司 Image frame loss detection method, device, equipment and storage medium
CN113163259A (en) * 2021-05-10 2021-07-23 宝宝巴士股份有限公司 FFmpeg-based video node rendering method and device
CN113760431B (en) * 2021-08-30 2024-03-29 百度在线网络技术(北京)有限公司 Application control method and device, electronic equipment and readable storage medium
CN113784094A (en) * 2021-08-31 2021-12-10 上海三旺奇通信息科技有限公司 Video data processing method, gateway, terminal device and storage medium
CN114445264B (en) * 2022-01-25 2022-11-01 上海秉匠信息科技有限公司 Texture compression method and device, electronic equipment and computer readable storage medium
CN114626975A (en) * 2022-03-21 2022-06-14 北京字跳网络技术有限公司 Data processing method, apparatus, device, storage medium and program product
WO2023245495A1 (en) * 2022-06-22 2023-12-28 云智联网络科技(北京)有限公司 Method and apparatus for converting rendered data into video stream, and electronic device
CN115225615B (en) * 2022-06-30 2024-02-23 如你所视(北京)科技有限公司 Illusion engine pixel streaming method and device
CN115379207A (en) * 2022-08-24 2022-11-22 中国第一汽车股份有限公司 Camera simulation method and device, electronic equipment and readable medium
CN116778046B (en) * 2023-08-28 2023-10-27 乐元素科技(北京)股份有限公司 Hair model processing method, device, equipment and medium based on multithreading

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101340598A (en) * 2008-08-07 2009-01-07 北京衡准科技有限公司 Method and apparatus for implementing three-dimensional playing of media
CN103096128B (en) * 2011-11-07 2016-07-06 中国移动通信集团公司 A kind of realize the method for video interactive, server, terminal and system
WO2016049187A1 (en) * 2014-09-23 2016-03-31 Lincolnpeak Systems, methods, and software for processing a question relative to one or more of a plurality of population research databases
CN105791977B (en) * 2016-02-26 2019-05-07 北京视博云科技有限公司 Virtual reality data processing method, equipment and system based on cloud service
CN107979763B (en) * 2016-10-21 2021-07-06 阿里巴巴集团控股有限公司 Virtual reality equipment video generation and playing method, device and system
CN108668167B (en) * 2017-03-28 2021-01-15 中国移动通信有限公司研究院 Video restoration method and device
CN107529711A (en) * 2017-08-01 2018-01-02 杭州安恒信息技术有限公司 The display methods and device of Streaming Media

Also Published As

Publication number Publication date
CN109510990A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109510990B (en) Image processing method and device, computer readable storage medium and electronic device
US10200744B2 (en) Overlay rendering of user interface onto source video
US11200426B2 (en) Video frame extraction method and apparatus, computer-readable medium
US20220014819A1 (en) Video image processing
CN104244088B (en) Display controller, screen picture transmission device and screen picture transfer approach
CN101505365B (en) Real-time video monitoring system implementing method based on network television set-top box
KR101596505B1 (en) Apparatus and method of an user interface in a multimedia system
CN110868625A (en) Video playing method and device, electronic equipment and storage medium
CN110012333A (en) Video frame in video flowing is transmitted to the method and related device of display
CN110827380A (en) Image rendering method and device, electronic equipment and computer readable medium
EP2953370A1 (en) Minimizing input lag in a remote GUI TV application
CN111083450A (en) Vehicle-mounted-end image remote output method, device and system
CN113497932B (en) Method, system and medium for measuring video transmission time delay
KR20160015128A (en) System for cloud streaming service, method of cloud streaming service based on type of image and apparatus for the same
CN114938461A (en) Video processing method, device and equipment and readable storage medium
CN113099308B (en) Content display method, display equipment and image collector
CN114339344B (en) Intelligent device and video recording method
KR20110071736A (en) Mobile device remote sharing apparatus and method
US11368743B2 (en) Telestration capture for a digital video production system
KR20160011158A (en) Screen sharing system and method
WO2023193524A1 (en) Live streaming video processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN117119233A (en) Display device and video uploading method
JP6067085B2 (en) Screen transfer device
CN115801878A (en) Cloud application picture transmission method, equipment and storage medium
CN115706828A (en) Data processing method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant