CN115866254A - Method and equipment for transmitting video frame and camera shooting parameter information


Info

Publication number
CN115866254A
Authority
CN
China
Prior art keywords
parameter information
information
byte
video frames
camera
Prior art date
Legal status
Pending
Application number
CN202211481736.6A
Other languages
Chinese (zh)
Inventor
袁科
黄海波
陈嘉伟
陈海钦
Current Assignee
Hiscene Information Technology Co Ltd
Original Assignee
Hiscene Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hiscene Information Technology Co Ltd filed Critical Hiscene Information Technology Co Ltd
Priority to CN202211481736.6A priority Critical patent/CN115866254A/en
Publication of CN115866254A publication Critical patent/CN115866254A/en
Priority to PCT/CN2023/121056 priority patent/WO2024109317A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 using adaptive coding
    • H04N 19/134 characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/169 characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 the unit being an image region, e.g. an object
    • H04N 19/172 the region being a picture, frame or field
    • H04N 19/186 the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides a method and device for transmitting video frames and imaging parameter information, comprising the following steps: acquiring a plurality of video frames captured by a camera device and at least one piece of imaging parameter information recorded when the video frames were captured; generating an encoding sequence from the plurality of video frames and the at least one piece of imaging parameter information, wherein the encoding sequence comprises a plurality of video frame coding units and at least one imaging parameter coding unit, and each imaging parameter coding unit comprises supplemental enhancement information indicating the corresponding imaging parameter information; and transmitting the encoding sequence to a computer device. The method synchronizes the camera's imaging parameter information with the captured video frames at the frame level of the image sequence and enables AR labels rendered over the live scene in mobile video scenarios, so that the stream-publishing end and the stream-playing end stay synchronized in real time, which facilitates real-time operational command in such application scenarios.

Description

Method and equipment for transmitting video frame and camera shooting parameter information
Technical Field
The present application relates to the field of communications, and in particular, to a technique for transmitting video frames and camera parameter information.
Background
In recent years, with the rapid development of science and technology, Augmented Reality (AR) has gradually entered the public eye. In particular, AR interaction based on real-time video streams is being adopted rapidly across industries. With the spread of the metaverse and of command systems built on AR real-time video streams, expert guidance through mobile monitoring devices (drones, AR glasses, mobile phones, fixed cameras, and high-point cameras) shows ever broader application prospects. For example, a drone flying high with a wide view can deliver a real-time bird's-eye video feed, AR glasses can deliver a real-time first-person view, and fixed Skynet surveillance cameras can stream in real time; combined with real-time transmission of the cameras' intrinsic and extrinsic parameters, these form an integrated air-to-ground real-time command system.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for transmitting video frames and camera parameter information.
According to an aspect of the present application, there is provided a method for transmitting video frames and imaging parameter information, applied to a transmission device connected to an imaging apparatus, wherein the method comprises:
acquiring a plurality of video frames shot by the camera device and at least one piece of camera shooting parameter information when the video frames are shot;
generating an encoding sequence according to the plurality of video frames and the at least one shooting parameter information, wherein the encoding sequence comprises a plurality of video frame encoding units and at least one shooting parameter encoding unit, and each shooting parameter encoding unit comprises supplementary enhancement information used for indicating corresponding shooting parameter information;
transmitting the encoded sequence to a computer device.
According to another aspect of the present application, there is provided a method for transmitting video frames and imaging parameter information, applied to a computer device comprising a display device, wherein the method comprises:
receiving a coding sequence of a plurality of video frames which are sent by corresponding transmission equipment and shot by a camera device, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplementary enhancement information used for indicating corresponding camera parameter information;
and decoding the coded sequence, and acquiring and displaying a plurality of corresponding video frames through the display device.
According to an aspect of the present application, there is provided a system and method for transmitting video frames and camera parameter information, the method comprising:
the method comprises the steps that a transmission device obtains a plurality of video frames shot by a corresponding camera device and at least one piece of camera shooting parameter information when the video frames are shot; generating an encoding sequence according to the plurality of video frames and the at least one shooting parameter information, wherein the encoding sequence comprises a plurality of video frame encoding units and at least one shooting parameter encoding unit, and each shooting parameter encoding unit comprises supplementary enhancement information used for indicating corresponding shooting parameter information; and transmitting the encoded sequence to a computer device;
and the computer equipment receives the coding sequence of the plurality of video frames sent by the transmission equipment and shot by the camera device, decodes the coding sequence, acquires and displays the corresponding plurality of video frames through the corresponding display device.
According to another aspect of the present application, there is provided an apparatus for transmitting video frames and imaging parameter information, the apparatus being connected to an imaging device, wherein the apparatus comprises:
a first module, configured to acquire a plurality of video frames captured by the imaging device and at least one piece of imaging parameter information recorded when the video frames were captured;
a second module, configured to generate an encoding sequence from the plurality of video frames and the at least one piece of imaging parameter information, where the encoding sequence includes a plurality of video frame coding units and at least one imaging parameter coding unit, and each imaging parameter coding unit includes supplemental enhancement information indicating the corresponding imaging parameter information;
a third module, configured to transmit the encoding sequence to a computer device.
according to an aspect of the present application, there is provided a computer apparatus for transmitting video frames and camera parameter information, the computer apparatus including a display device, wherein the computer apparatus includes:
the device comprises a first module, a second module and a third module, wherein the first module is used for receiving a coding sequence of a plurality of video frames which are sent by corresponding transmission equipment and shot by a camera device, the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplementary enhancement information used for indicating corresponding camera parameter information;
and the second module is used for decoding the coding sequence, acquiring and displaying a plurality of corresponding video frames through the display device.
According to an aspect of the present application, there is provided a computer apparatus, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of the method as described in any one of the above.
According to an aspect of the application, there is provided a computer-readable storage medium having stored thereon a computer program/instructions, characterized in that the computer program/instructions, when executed, cause a system to perform the steps of the method as described in any of the above.
According to an aspect of the application, there is provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method as described in any of the above.
Compared with the prior art, the present application writes the imaging parameter information into corresponding supplemental enhancement information (SEI) while transmitting the video frames, so that the imaging parameter information travels to the computer device together with the video frames, allowing the video frames to be presented and, at the same time, rendered over in real time. This synchronizes the camera's imaging parameter information with the captured video frames at the frame level of the image sequence and enables AR labels rendered over the live scene in mobile video scenarios, achieving real-time synchronization between the stream-publishing end and the stream-playing end and facilitating real-time operational command in such application scenarios. Since SEI is not required for the decoding process, in other words it has no direct influence on decoding, a scenario that does not need real-time synchronization of the camera's imaging parameter information can simply play the video normally without decoding the parameter information. In addition, the scheme is immune to network jitter: however the network jitters, the image data and the imaging parameter information remain synchronized at the frame level of the image sequence.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart illustrating a method for transmitting video frames and camera parameter information according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for transmitting video frames and camera parameter information according to another embodiment of the present application;
FIG. 3 illustrates a device structure diagram of a transmission device according to one embodiment of the present application;
FIG. 4 illustrates a device structure diagram of a computer device according to an embodiment of the present application;
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include non-permanent memory, Random Access Memory (RAM), and/or non-volatile memory in a computer-readable medium, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smart phone or a tablet computer; the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, etc. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the traditional method of transmitting video frames and imaging parameter information, algorithm data sets store the video frames and the imaging parameter information separately: the video frames are raw image data without video-coding compression, which occupies large storage space, and the video frames and the parameter information must additionally be synchronized by timestamps, making the process complex and the transmission inefficient. Another conventional approach sends the parameter information over an additional signaling channel (e.g., WebSocket, Message Queuing Telemetry Transport (MQTT), etc.), which cannot guarantee synchronization at the frame level of the image sequence: because video encoding and decoding take time on top of network transmission, the signaling data typically reaches the receiving end before the corresponding video frame, and network fluctuation together with unstable encoding/decoding time makes the delay jitter of the signaling data uncontrollable. Yet another conventional approach defines a proprietary protocol based on H.264/H.265 with extra data packets, but then, in scenarios without imaging parameter information, players on the market cannot pull and play the stream at all, so that approach has essentially zero compatibility.
The present application provides a method for transmitting video frames and imaging parameter information, applied to a system consisting of a transmission device and a computer device, the method comprising the following steps:
the method comprises the steps that a transmission device obtains a plurality of video frames shot by a corresponding camera device and at least one piece of camera shooting parameter information when the video frames are shot; generating a coding sequence according to the plurality of video frames and the at least one shooting parameter information, wherein the coding sequence comprises a plurality of video frame coding units and at least one shooting parameter coding unit, and each shooting parameter coding unit comprises supplementary enhancement information used for indicating corresponding shooting parameter information; and transmitting the encoded sequence to a computer device;
and the computer equipment receives the coding sequence of the plurality of video frames which are sent by the transmission equipment and shot by the camera device, decodes the coding sequence, and acquires and displays the corresponding plurality of video frames through the corresponding display device.
The transmission device includes, but is not limited to, a stream-publishing (push) device that encodes and transmits the video frames and imaging parameter information; it may have a built-in camera device or establish a connection with an external camera device, and can acquire a plurality of video frames captured by that camera device. The transmission device may be, for example, a ground control unit of an unmanned aerial vehicle, augmented reality glasses, or another data processing device. The corresponding computer device includes, but is not limited to, a stream-playing (pull) device that decodes and presents the video frames, and in some cases may further superimpose virtual information and the like on the video frames based on the imaging parameter information; it may be, for example, a mobile phone, a computer, a tablet computer, or augmented reality glasses. The video frames and at least one piece of imaging parameter information are encoded into an encoding sequence at the transmission device and transmitted to the computer device over a communication line; the computer device decodes the received sequence to obtain and present the video frames, and may further decode the imaging parameter information and, based on it, superimpose virtual information and the like on the video frames.
Referring to fig. 1, a method for transmitting video frames and imaging parameter information is shown, which is applied to a transmission apparatus connected with an imaging device, wherein the method comprises step S101, step S102 and step S103. In step S101, acquiring a plurality of video frames captured by the imaging device and at least one piece of imaging parameter information when the plurality of video frames are captured; in step S102, generating an encoding sequence according to the plurality of video frames and the at least one image pickup parameter information, where the encoding sequence includes a plurality of video frame encoding units and at least one image pickup parameter encoding unit, and each image pickup parameter encoding unit includes supplemental enhancement information for indicating corresponding image pickup parameter information; in step S103, the encoded sequence is transmitted to a computer device.
Specifically, in step S101, a plurality of video frames captured by the imaging device and at least one piece of imaging parameter information recorded when those video frames were captured are acquired. For example, the imaging device may be built into the transmission device: the transmission device issues an image acquisition instruction that starts the imaging device, the imaging device captures a plurality of video frames of the current field of view in real time based on that instruction, and the transmission device additionally obtains the imaging parameter information at acquisition time, such as the internal parameters and external parameters of the imaging device, where the external parameters include its position, angle, and the like. The imaging device may instead be external to the transmission device, such as an external camera or the camera of another device: the external imaging device performs image acquisition according to user instructions, capturing a plurality of video frames of the current field of view through its camera while also obtaining the imaging parameter information at acquisition time, such as by reading its internal and external parameters; the imaging device then sends the video frames and the corresponding imaging parameter information to the transmission device over their communication connection. The plurality of video frames comprise the video frames to be transmitted in the video stream captured by the imaging device; they may be all captured frames or frames filtered according to a preset rule, where the preset rule includes, but is not limited to, a preset network quality, a preset time interval, a preset frame count, a picture change from the preceding frame exceeding a threshold, and the like, without limitation here. The imaging parameter information includes indication information about the internal and external parameters for corresponding video frames (e.g., one or more of the plurality of captured frames).
For example, in some embodiments, the imaging parameter information includes internal reference (intrinsic) information and imaging pose information of the imaging device. To simplify computation and reduce the amount of data transmitted, the transmission device does not directly transmit a coordinate transformation matrix for each captured video frame; instead it transmits the intrinsics and the imaging pose information from which that matrix can be computed. The intrinsics include, but are not limited to, the focal length along the long axis of the camera without zoom, the focal length along the short axis, the optical-center offset along the long axis, the optical-center offset along the short axis, the relative zoom factor of the camera, the width of the video image sequence, and the like. The imaging pose information includes the imaging position information and imaging angle information of the imaging device. The position is usually expressed as geographic coordinates (B, L, H), where B represents latitude, L represents longitude, and H represents height (e.g., altitude); the latitude and longitude may be obtained from a sensor such as GPS, and the height from a Digital Elevation Model (DEM) or in other ways, the above being only examples and not limitations. The imaging angle information comprises three-axis angle information, yaw, pitch, and roll, obtained from the corresponding gimbal or from an attitude sensor (such as a three-axis gyroscope).
Here, the number of the at least one piece of imaging parameter information may be one or more, and accordingly, if the number of the imaging parameter information is one, the imaging parameter information when the plurality of video frames are captured is the same, or the imaging parameter information is an imaging parameter screened from a plurality of candidate imaging parameter information obtained when the plurality of video frames are captured. When the number of the at least one piece of photographing parameter information is multiple, the plurality of pieces of photographing parameter information may correspond to each of the plurality of video frames one to one, for example, photographing parameter information of a photographing apparatus is recorded once when each video frame is acquired; in some cases, the plurality of pieces of imaging parameter information may be imaging parameters or the like representing video frames in the plurality of video frames, and do not correspond to the plurality of video frames one to one. Here, the plurality of pieces of imaging parameter information may be selected from a plurality of pieces of candidate imaging parameter information according to a preset condition, and the number of the plurality of pieces of candidate imaging parameter information is not limited, and may be greater than or equal to the number of frames of the plurality of video frames. In some embodiments, in step S101, a plurality of video frames captured by the imaging device are acquired; acquiring a plurality of candidate image pickup parameter information when the plurality of video frames are shot, and determining at least one image pickup parameter information meeting a preset condition from the plurality of candidate image pickup parameter information. For example, the transmission device acquires a plurality of video frames captured by the camera device, and acquires a plurality of candidate camera parameter information when the plurality of video frames are captured, where the plurality of candidate camera parameter information may be in one-to-one correspondence with the plurality of video frames, such as recording the candidate camera parameter information of the corresponding camera device once when each video frame is captured, or the plurality of candidate camera parameter information is a plurality of candidate camera parameter information captured based on a preset time interval, where the corresponding preset time interval may be greater than or equal to a capture interval of the video frames, or may be less than the capture interval of the video frames, and the like. 
After acquiring the plurality of candidate imaging parameter information, the transmission device screens out at least one piece of imaging parameter information based on preset conditions. Specifically, as in some embodiments, the preset conditions include, but are not limited to: if the acquisition time of a candidate equals the capture time of one of the video frames, that candidate is taken as imaging parameter information; candidates are sampled at a preset interval to determine the corresponding imaging parameter information; and if a candidate differs from its preceding candidate, that candidate is taken as imaging parameter information. For example, according to the capture time of each video frame, the candidate whose acquisition time equals the capture time of one of the plurality of video frames is taken as the imaging parameter information of that video frame. For another example, when the number of candidates is not limited, to save transmission resources and improve transmission efficiency, one may judge that the imaging parameters change little between consecutive frames, so that intermittent parameter information suffices to describe their trend over the real-time video stream; the transmission device then selects qualifying parameter information from the candidates at a preset interval (e.g., a preset time interval, a preset count interval, or a preset frame-number interval), such as the 1st, 6th, and 11th candidates. For yet another example, to stay sensitive to changes in the imaging parameter information while saving transmission resources and improving transmission efficiency, transmission of identical candidates can be omitted: if the candidates corresponding to several consecutive time points or consecutive video frames are the same, the transmission device takes one of them (e.g., the first) as the imaging parameter information, so that a candidate is taken as imaging parameter information whenever it differs from its preceding candidate.
Candidate imaging parameter information being "the same" means that the parameter difference between a candidate and its preceding candidate imaging parameter information (e.g., that of the previous N time points or the previous N video frames) satisfies all of the following: the change in internal parameters is at most a preset change threshold, the position difference of the imaging position information is at most a position difference threshold, and the angle difference of the imaging angle information is at most an angle difference threshold.
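By way of illustration only, the screening conditions above could be combined as in the following Python sketch; the field names, the threshold values, and the decision to OR the three conditions together are our assumptions, not taken from the application:

    import dataclasses
    from typing import List

    @dataclasses.dataclass
    class CandidateParams:
        timestamp: float          # acquisition time, seconds (assumed field)
        intrinsics: List[float]   # e.g. [fx, fy, cx, cy, zoom, width]
        position: List[float]     # (B, L, H): latitude, longitude, height
        angles: List[float]       # (yaw, pitch, roll), degrees

    # Illustrative thresholds; the application only says thresholds exist.
    INTRINSIC_EPS, POSITION_EPS, ANGLE_EPS = 1e-6, 1e-6, 0.1

    def is_same(a: CandidateParams, b: CandidateParams) -> bool:
        """'Same' per the application: every difference within its threshold."""
        return (all(abs(x - y) <= INTRINSIC_EPS for x, y in zip(a.intrinsics, b.intrinsics))
                and all(abs(x - y) <= POSITION_EPS for x, y in zip(a.position, b.position))
                and all(abs(x - y) <= ANGLE_EPS for x, y in zip(a.angles, b.angles)))

    def screen(candidates: List[CandidateParams],
               frame_times: List[float],
               sample_every: int = 5) -> List[CandidateParams]:
        """Keep a candidate if (a) its time matches a frame's capture time,
        (b) it falls on the preset sampling interval, or (c) it differs
        from its predecessor."""
        kept, prev = [], None
        for i, c in enumerate(candidates):
            matches_frame = c.timestamp in frame_times   # exact match, as in the text
            on_interval = i % sample_every == 0
            changed = prev is None or not is_same(c, prev)
            if matches_frame or on_interval or changed:
                kept.append(c)
            prev = c
        return kept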
In step S102, an encoding sequence is generated according to the plurality of video frames and the at least one piece of imaging parameter information, where the encoding sequence includes a plurality of video frame coding units and at least one imaging parameter coding unit, and each imaging parameter coding unit includes supplemental enhancement information indicating the corresponding imaging parameter information. For example, after acquiring the video frames and the imaging parameter information, the transmission device generates a plurality of video frame coding units and at least one imaging parameter coding unit. Both kinds of units are NAL coding units (NAL units), each consisting of a unit header (NAL header) and a unit body (NALU payload), with the unit header indicating the content type of the unit body. Specifically, the unit header is composed of forbidden_zero_bit (1 bit) + nal_ref_idc (2 bits) + nal_unit_type (5 bits). forbidden_zero_bit is a forbidden bit, initially 0; when the network detects a bit error in a NAL coding unit, this bit can be set to 1 so that the receiver corrects the error or drops the unit. nal_ref_idc carries the NAL importance indication: the larger the value, the more important the unit, and when the decoder cannot keep up with decoding, NALUs of importance 0 may be discarded. nal_unit_type indicates the content type of the unit body: a type value of 6 means the unit body is Supplemental Enhancement Information (SEI), a value of 5 means a slice of an IDR picture, and other values denote the content types assigned to them. A video frame coding unit is obtained by encoding one or more of the plurality of video frames, and an imaging parameter coding unit is obtained by encoding one piece of the at least one imaging parameter information. Having determined the video frame coding units and the imaging parameter coding units, the transmission device generates the corresponding encoding sequence from them. Optionally, besides the plurality of video frame coding units and the at least one imaging parameter coding unit, the encoding sequence may contain other NAL coding units whose unit-body content types differ from those of the video frame and imaging parameter coding units.
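As a minimal sketch of the one-byte unit header just described (the field widths follow the H.264 layout quoted above; the function names are ours):

    def pack_nal_header(nal_ref_idc: int, nal_unit_type: int) -> int:
        """forbidden_zero_bit(1) | nal_ref_idc(2) | nal_unit_type(5)."""
        assert 0 <= nal_ref_idc <= 3 and 0 <= nal_unit_type <= 31
        return (0 << 7) | (nal_ref_idc << 5) | nal_unit_type  # forbidden bit stays 0

    def parse_nal_header(byte: int):
        forbidden_zero_bit = (byte >> 7) & 0x1  # 1 would flag a corrupted unit
        nal_ref_idc = (byte >> 5) & 0x3         # importance; 0 means discardable
        nal_unit_type = byte & 0x1F             # 6 = SEI, 5 = IDR slice, ...
        return forbidden_zero_bit, nal_ref_idc, nal_unit_type

    # Example: the header of an SEI unit of lowest importance.
    assert parse_nal_header(pack_nal_header(0, 6)) == (0, 0, 6)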
In some embodiments, in step S102, a video coding technique is used to generate the plurality of video frame coding units and the at least one imaging parameter coding unit from the plurality of video frames and the at least one piece of imaging parameter information, and the corresponding encoding sequence is generated from those units. For example, the video coding techniques include, but are not limited to, those of the H.264 and H.265 standards. The imaging parameter coding units and the video frame coding units are data packets of the same level and both belong to the code stream. For an imaging parameter coding unit, nal_unit_type in the unit header takes the value 6, the content type of its unit body is SEI, and the transmission device encapsulates the corresponding imaging parameter information into the SEI; the imaging parameters may be stored in the supplemental enhancement information using any serialization format (to be deserialized by the receiver), including the JSON string format. In some embodiments, the imaging parameter information is stored in the supplemental enhancement information in the form of a JSON string. The JSON data structure includes a collection of "key/value" pairs (a collection of name/value pairs), understood in different languages as an object, record, struct, dictionary, hash table, keyed list, or associative array. It also includes an ordered list of values, understood in most languages as an array. Concretely, strings are enclosed in double quotation marks, and arrays and objects may be nested, for example:
"imaging parameter information": [
{
The long focal length is AA,
the short focal length is BB,
“roll”:XX
}]。
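A minimal serialization sketch matching the example above; the key names and values are illustrative only, since the application does not fix a schema:

    import json

    imaging_parameter_information = [{
        "long focal length": 2890.5,   # illustrative values throughout
        "short focal length": 2887.2,
        "cx": 960.0, "cy": 540.0,      # optical-center offsets (assumed keys)
        "B": 31.2304, "L": 121.4737, "H": 412.0,   # latitude, longitude, height
        "yaw": 183.4, "pitch": -12.5, "roll": 0.8,
    }]

    # ensure_ascii=False keeps any non-ASCII characters readable;
    # the receiver recovers the structure with json.loads().
    sei_content = json.dumps(
        {"imaging parameter information": imaging_parameter_information},
        ensure_ascii=False).encode("utf-8")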
In some embodiments, generating the corresponding encoding sequence from the plurality of video frame coding units and the at least one imaging parameter coding unit includes: arranging the video frame coding units and the at least one imaging parameter coding unit into the encoding sequence according to the acquisition order of the video frames. For example, after obtaining the video coding units and the imaging parameter coding units, the transmission device sends the video coding units in the acquisition order of the corresponding video frames, thereby generating the encoding sequence, where each imaging parameter coding unit is placed adjacent to (e.g., immediately next to) the transmission position of the video frame coding unit it corresponds to, and the coding units are separated by the corresponding separator, i.e., a start code: 00 00 00 01 or 00 00 01.
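A sketch of this arrangement step, assuming each coding unit is already a complete NAL unit (header byte plus payload); placing each imaging parameter unit immediately before its video frame unit is one way to satisfy the adjacency described above:

    START_CODE = b"\x00\x00\x00\x01"  # the 3-byte 00 00 01 is the other separator

    def build_annexb_sequence(frame_units, param_units_by_index):
        """Emit frames in acquisition order; put each imaging parameter coding
        unit next to the frame it describes ('before' is our choice here)."""
        out = bytearray()
        for i, frame in enumerate(frame_units):
            for sei in param_units_by_index.get(i, []):
                out += START_CODE + sei
            out += START_CODE + frame
        return bytes(out)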
In step S103, the encoded sequence is transmitted to a computer device. For example, after the transmission device acquires the corresponding coding sequence, the transmission device transmits the coding sequence to the computer device based on the communication connection with the computer device, so that the computer device can present the video frame in real time. In some cases, the computer device may further decode the encoded sequence to obtain corresponding imaging parameter information, and superimpose and present virtual information and the like in the video frame in real time, such as highlight rendering or superimposing other information (e.g., text, image, or video about an object in an image) and the like.
In some embodiments, the supplemental enhancement information includes corresponding byte length information and byte parameter information, where the byte parameter information is filled with the corresponding imaging parameter information and the byte length information indicates the byte length of the byte parameter information. For example, the imaging parameter coding unit includes a unit header and a unit body; the unit header includes nal_unit_type indicating the content type of the unit body, and a type value of 6 means the unit body is supplemental enhancement information. The unit body of the supplemental enhancement information includes the byte length information (e.g., SEI payload size), indicating the byte length of the byte parameter information, and the byte parameter information (e.g., SEI payload content), carrying the imaging parameter information. The byte length information counts the bytes from the first byte of the byte parameter information to its end byte (where the end byte may or may not include a terminator), or it counts the bytes from first to end after the byte parameter information has undergone anti-contention processing; in some cases, if the byte length exceeds 255, one more byte is added to represent the length, and so on. In some cases the unit body further includes a body coding specification type (e.g., SEI payload type), byte identification information (e.g., SEI payload uuid), and an RBSP trailing padding byte (e.g., rbsp trailing bits with value 0x80). In some embodiments, the supplemental enhancement information further includes corresponding byte identification information, and the byte length information indicates the byte lengths of the byte identification information plus the byte parameter information. For example, the body coding specification type of the unit body may be 05; if the SEI payload type value is 05, the unit body further includes the byte identification information (SEI payload uuid), and the byte length information is the total byte length counted from the first byte of the byte identification information to the end byte of the byte parameter information, or the total byte length of the byte identification information and byte parameter information after anti-contention processing. The unit body is thus laid out as: NALU payload = SEI payload type + SEI payload size + SEI payload uuid + SEI payload content + rbsp trailing bits. The byte length of the byte identification information before the anti-contention operation is a fixed value, usually 16 bytes, and its content is randomly generated.
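Expressed as code, the unit-body layout above might be built as follows. This is a sketch: it takes the uuid and JSON content already anti-contention processed (the escaping routine is sketched after the substitution examples below), counts the byte length over those escaped bytes as this application describes (standard H.264 counts the length before escaping), and uses the usual 0xFF-chaining as one reading of the "add one byte" rule:

    def encode_ff_chained(value: int) -> bytes:
        """SEI type/size coding: one 0xFF byte per full 255, then the remainder."""
        out = bytearray()
        while value >= 255:
            out.append(0xFF)
            value -= 255
        out.append(value)
        return bytes(out)

    def build_sei_nalu(escaped_uuid_and_content: bytes) -> bytes:
        """NALU payload = SEI payload type + SEI payload size + SEI payload uuid
        + SEI payload content + rbsp trailing bits. The argument is the 16-byte
        uuid plus JSON content, already anti-contention processed."""
        nalu = bytearray([0x06])                   # header 0|00|00110: type 6, SEI
        nalu += encode_ff_chained(5)               # payload type 05 (user data)
        nalu += encode_ff_chained(len(escaped_uuid_and_content))  # byte length info
        nalu += escaped_uuid_and_content           # uuid + imaging parameter JSON
        nalu.append(0x80)                          # rbsp_trailing_bits
        return bytes(nalu)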
In some embodiments, the method further includes step S104 (not shown). In step S104, anti-contention processing is performed on the byte identification information and the byte parameter information by inserting a preset character into their bytes, yielding the anti-contention-processed byte identification information and byte parameter information; the corresponding byte length information is then determined by counting the anti-contention-processed byte identification information and byte parameter information. For example, coding units are delimited by the separator 0x000001 (or by 0x000000); if the same byte sequence occurred inside a coding unit, the unit would be split there as well, leaving its content incomplete. The content of the coding unit, such as the byte identification information and byte parameter information in the unit body, must therefore be anti-contention processed. Specifically, according to a preset character (e.g., a system default or a user-supplied preset character, such as 03), the preset character is inserted into the bytes of the byte identification information and byte parameter information, as in the following examples:
0x000000 -> 0x00000300
0x000001 -> 0x00000301
0x000002 -> 0x00000302
0x000003 -> 0x00000303
the anti-contention process described above is to add 03 after 0000 in each byte. After determining the byte identification information and the byte parameter information after the anti-contention processing, the transmission device counts the total byte length corresponding to the byte identification information and the byte parameter information after the anti-contention processing from the first byte to the end byte to determine the corresponding byte length information.
Of course, those skilled in the art will appreciate that the above anti-contention processes are merely exemplary, and that other existing or future anti-contention processes, if applicable to the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In some embodiments, the method further comprises a step S105 (not shown), in which the plurality of video frames are color-space converted to determine a plurality of converted video frames conforming to a preset color space; in step S102, the encoding sequence is then generated from the plurality of converted video frames and the at least one piece of imaging parameter information, where the encoding sequence includes a plurality of video frame coding units and at least one imaging parameter coding unit, and each imaging parameter coding unit includes supplemental enhancement information indicating the corresponding imaging parameter information. For example, the color space of video frames acquired by the imaging device is usually RGB; for better transmission, or where the streaming pipeline does not support RGB images, the video frames are converted to a preset color space (e.g., YUV, CMY, HSV, or HSI), the converted video frames are determined, and subsequent encoding and transmission are based on those converted frames. Of course, those skilled in the art will appreciate that the above color spaces are merely exemplary, and that other existing or future color spaces, if applicable to the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
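For instance, a per-pixel RGB-to-YUV conversion using the BT.601 full-range coefficients, one common choice when the preset color space is YUV (the exact matrix depends on the standard actually used):

    def rgb_to_yuv(r: int, g: int, b: int):
        """BT.601 full-range conversion; inputs and outputs are 0-255."""
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128
        clamp = lambda x: max(0, min(255, round(x)))
        return clamp(y), clamp(u), clamp(v)

    # A pure red pixel:
    print(rgb_to_yuv(255, 0, 0))   # -> (76, 85, 255)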
Fig. 2 shows a method for transmitting video frames and camera parameter information according to another aspect of the present application, applied to a computer device including a display device, wherein the method includes steps S201 and S202. In step S201, receiving an encoding sequence of a plurality of video frames sent by a corresponding transmission device and captured by a camera device, where the encoding sequence includes a plurality of video frame encoding units and at least one camera parameter encoding unit, and each camera parameter encoding unit includes supplemental enhancement information for indicating corresponding camera parameter information; in step S202, the encoded sequence is decoded, and a plurality of corresponding video frames are obtained and presented by the display device.
Specifically, in step S201, an encoding sequence of a plurality of video frames captured by a camera device and sent by a corresponding transmission device is received, where the encoding sequence includes a plurality of video frame encoding units and at least one camera parameter encoding unit, and each camera parameter encoding unit includes supplemental enhancement information for indicating corresponding camera parameter information. For example, the computer device receives an encoding sequence determined by the transmission device based on a plurality of video frames shot by the camera device and at least one piece of camera parameter information, wherein the generation process of the encoding sequence is as described above and is not described herein again.
In step S202, the encoded sequence is decoded, and the corresponding plurality of video frames are obtained and presented through the display device. For example, the computer device includes a corresponding display device for presenting the decoded video frames, such as a display screen or a projector, which may be built into the computer device or externally connected to it. After the computer device obtains the encoding sequence, it decodes it with the corresponding decoding technique: the video frame coding units are decoded to obtain the corresponding video frames, while whether the at least one imaging parameter coding unit is decoded depends on the requirements. As in some embodiments, in step S202, based on a preset, the at least one imaging parameter coding unit is ignored and only the plurality of video frame coding units are decoded to obtain and present the corresponding video frames through the display device. For example, although the scheme enables real-time synchronous transmission of video frames and imaging parameter information, SEI is optional at the decoding stage, so whether to decode the current imaging parameter coding unit can be decided from the current presentation requirement: if the application's default is merely to present the video (e.g., no virtual information needs to be superimposed), or if the application offers a user-selectable option for decoding imaging parameter information that is switched off, the application decodes only the video frame coding units and presents the video frames, which saves computing resources and can improve video frame decoding efficiency. Based on the preset (e.g., an application default or the user's selection regarding imaging parameter information), the computer device may directly skip the NAL coding units whose unit-body content type is SEI, ignore their decoding, decode the other coding units to obtain the frame data of the plurality of video frames, and present them through the display device.
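A simplified sketch of this skip on the pull side: split the Annex-B stream on start codes and ignore units whose nal_unit_type is 6 (the parsing here is deliberately naive and assumes a well-formed stream as produced in the sketches above):

    import re

    def iter_nal_units(annexb: bytes):
        """Yield NAL units from an Annex-B stream (3- or 4-byte start codes)."""
        matches = list(re.finditer(b"\x00\x00\x01", annexb))
        for i, m in enumerate(matches):
            end = matches[i + 1].start() if i + 1 < len(matches) else len(annexb)
            unit = annexb[m.end():end]
            if unit.endswith(b"\x00"):   # leading zero of a 4-byte start code
                unit = unit[:-1]
            if unit:
                yield unit

    def decodable_units(annexb: bytes, decode_sei: bool = False):
        """Per the preset, skip units whose content type is SEI (type 6)."""
        for unit in iter_nal_units(annexb):
            if (unit[0] & 0x1F) == 6 and not decode_sei:
                continue                 # ignore the imaging parameter unit
            yield unit                   # hand everything else to the decoder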
In some embodiments, in step S202, the at least one imaging parameter coding unit is decoded to obtain the corresponding imaging parameter information, and the video frame coding units are decoded to obtain and present the corresponding video frames through the display device; the method then further comprises a step S203 (not shown), in which coordinate transformation information from the world coordinate system to the pixel coordinate system of the imaging device is determined from the imaging parameter information, and virtual information is superimposed and presented in the plurality of video frames according to that coordinate transformation information. For example, the transmission device encapsulates the imaging parameter information into coding units whose unit-body content type is SEI, encodes the video frames, and transmits the resulting encoding sequence of the video stream to the computer device over the communication connection. After receiving the stream, the computer device parses the encoding sequence of the real-time video stream into the image data of the video frames and the SEI coding units through its video decoding module, obtains the camera intrinsics and camera pose information from the SEI coding units, and then computes the screen coordinate position of each geographic tag using the camera intrinsics and pose (the tag's geographic position information and/or map position information may come from user input, from a recognized object whose corresponding geographic/map position is read out, or directly from stored geotag data), so as to render all geographic tags on the screen. In some embodiments, the computer device uses the camera pose information in the SEI coding unit to align, in the virtual world coordinate system, the coordinates the camera has in the physical world coordinate system; the model matrix transforms a geotag's object coordinates into virtual-world coordinates, the view matrix transforms virtual-world coordinates into camera coordinates, and the camera intrinsics from the SEI coding unit are turned into a projection matrix that maps camera coordinates into clip coordinates (aligned with the screen/video coordinates), and so on.
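The coordinate chain can be condensed into a pinhole-projection sketch: given intrinsics (fx, fy, cx, cy) from the SEI and a pose expressed as a world-to-camera rotation R and camera position C (deriving R and C from yaw/pitch/roll and (B, L, H) is omitted here; all names are ours):

    import numpy as np

    def project(world_pt, R, C, fx, fy, cx, cy):
        """World -> camera -> pixel. R: 3x3 world-to-camera rotation,
        C: camera position in world coordinates (both from the pose SEI)."""
        p_cam = R @ (np.asarray(world_pt, float) - np.asarray(C, float))
        if p_cam[2] <= 0:
            return None                    # behind the camera, nothing to render
        u = fx * p_cam[0] / p_cam[2] + cx  # pinhole model with the SEI intrinsics
        v = fy * p_cam[1] / p_cam[2] + cy
        return u, v

    # A geotag 10 m in front of a camera at the origin looking down +Z:
    print(project([0, 0, 10], np.eye(3), [0, 0, 0], 1000, 1000, 960, 540))
    # -> (960.0, 540.0): the tag renders at the image center.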
In some embodiments, the supplemental enhancement information includes the corresponding byte length information together with the anti-contention-processed byte identification information and byte parameter information, where the byte parameter information is filled with the corresponding imaging parameter information and the byte length information indicates the byte length of the byte identification information and byte parameter information after anti-contention processing. Decoding the at least one imaging parameter coding unit to obtain the corresponding imaging parameter information then further comprises: decoding the at least one imaging parameter coding unit to obtain the corresponding at least one piece of supplemental enhancement information, and reading from it the byte length information and the anti-contention-processed byte identification information and byte parameter information; reversing the anti-contention processing on them to obtain the real byte identification information and real byte parameter information, and determining the real byte length information after this reversal, the real byte length information indicating the byte lengths of the real byte identification information and real byte parameter information; and reading the corresponding imaging parameter information from the real byte parameter information according to the real byte length information and the real byte identification information, thereby determining the corresponding at least one piece of imaging parameter information. For example, following the anti-contention processing described above, the byte length information in the supplemental enhancement information indicates the total byte length of the anti-contention-processed byte identification information and byte parameter information, and those processed bytes can be located in the video stream according to the byte length information; when decoding, the computer device must first reverse the anti-contention processing to recover the real byte identification information and real byte parameter information as they were before processing, for example by removing, according to the preset character, the inserted characters at the specific positions in the byte identification information (e.g., SEI payload uuid) and byte parameter information (e.g., SEI payload content) of the supplemental enhancement information.
After reversing the anti-contention processing on the byte identification information and the byte parameter information, the computer device determines the real byte identification information and real byte parameter information, counts the total byte length of the two to determine the corresponding real byte length information, and then locates the real byte parameter information using the real byte identification information, for example its known byte length, so as to obtain the camera parameter information. For example, assume the real byte length information is 47 bytes; the computer device subtracts the known 16-byte length of the real byte identification information, giving 47 - 16 = 31 bytes as the byte length of the real byte parameter information, so the real byte parameter information, i.e., the camera parameter information, can be read out following the real byte identification information. In some embodiments, the byte length of the real byte parameter information includes the length of a terminator: because the byte parameter information is a character string, and character strings in the C language carry a terminator 0x00, the byte length of the real byte parameter information minus one byte corresponds to the byte length of the camera parameter information, so 30 bytes of camera parameter information can be read from the real byte parameter information. A sketch of these decoding-side steps follows.
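The sketch below combines the two steps above: it reverses the escaping and then applies the 16-byte identification and terminator arithmetic. The escape rule shown (drop a 0x03 that follows two 0x00 bytes) is the standard H.264/H.265 emulation-prevention convention and is an assumption here, since the embodiment leaves the preset character unspecified.

```python
def remove_emulation_prevention(data: bytes) -> bytes:
    """Reverse the anti-contention processing (assumed 0x03 escape byte)."""
    out, zeros = bytearray(), 0
    for b in data:
        if zeros >= 2 and b == 0x03:
            zeros = 0                 # escape byte: drop it and reset the zero run
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

def read_camera_parameters(escaped_payload: bytes, id_len: int = 16) -> bytes:
    """Recover the camera parameter string from an escaped SEI payload."""
    real = remove_emulation_prevention(escaped_payload)
    real_length = len(real)           # real byte length information, e.g. 47
    params = real[id_len:]            # skip identification, e.g. 47 - 16 = 31 bytes
    if params.endswith(b"\x00"):      # drop the C-string terminator, leaving e.g. 30
        params = params[:-1]
    return params
```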
In some embodiments, the method further comprises a step S204 (not shown) of determining and storing, in step S204, an encapsulation file for the plurality of video frames according to the coding sequence; in step S202, if a presentation operation on the encapsulation file is obtained, the coding sequence in the encapsulation file is decoded, and the corresponding plurality of video frames are obtained and presented through the display device. For example, after receiving the corresponding coding sequence, the computer device may directly decode it and present the real-time video stream, or may generate a corresponding encapsulation file. In some embodiments, the encapsulation file is a video file, for example one packaged as MP4, MOV, MPG, WMV, or the like, and is stored in a database for subsequent playback, for example for rehearsal, after-action review, or as a data set for optimizing the coordinate transformation algorithm. If the computer device subsequently obtains a presentation operation on the encapsulation file, it retrieves the encapsulation file from the database, decodes the corresponding coding sequence, and presents the corresponding plurality of video frames; the presentation operation may be based on a user operation, intelligent recognition, or the like. In some cases, when the computer device later retrieves and presents the encapsulation file, the ordering of the camera parameter coding units relative to the corresponding video frame coding units in the file is unchanged; accordingly, if the camera parameter coding units need to be decoded when the corresponding video frames are presented, video-frame-level synchronization between the decoded camera parameter information and the presented video frames can still be achieved. On one hand, the rendering of the virtual content in the scene can be restored, which is useful for rehearsal, review, and similar scenarios; on the other hand, the scene itself can be restored, so the scene video and camera parameters need not be re-acquired each time the algorithm is run or tested, which helps optimize the coordinate transformation algorithm.
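When the stored encapsulation file is read back, the decoder must recover the camera parameter coding units in stream order. A minimal sketch, assuming the stored coding sequence uses H.264 Annex-B start codes (0x000001 or 0x00000001) and that SEI units carry NAL type 6; the file name is hypothetical.

```python
import re

def split_nal_units(stream: bytes):
    """Split an Annex-B byte stream into NAL unit payloads."""
    # Match the 4-byte start code before the 3-byte one so it is not split.
    starts = [m.end() for m in re.finditer(b"\x00\x00\x00\x01|\x00\x00\x01", stream)]
    for begin, end in zip(starts, starts[1:] + [len(stream)]):
        yield stream[begin:end]

with open("session.h264", "rb") as f:           # hypothetical stored stream
    for nal in split_nal_units(f.read()):
        if nal and nal[0] & 0x1F == 6:          # H.264 NAL type 6 = SEI
            print("SEI unit of", len(nal), "bytes")
```

Because the SEI units travel in the same byte stream as the frame units, containers that copy the stream preserve the frame-level pairing described above.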
The foregoing mainly describes embodiments of a method for transmitting video frames and camera parameter information according to an aspect of the present application; specific apparatuses capable of implementing the embodiments are also provided and are described below with reference to fig. 3 and fig. 4.
Referring to fig. 3, a transmission device for transmitting video frames and camera parameter information is shown. The transmission device is connected to a camera device and includes a one-one module 101, a one-two module 102, and a one-three module 103. The one-one module 101 is configured to acquire a plurality of video frames shot by the camera device and at least one piece of camera parameter information from when the plurality of video frames were shot. The one-two module 102 is configured to generate a coding sequence according to the plurality of video frames and the at least one piece of camera parameter information, where the coding sequence includes a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit includes supplemental enhancement information indicating the corresponding camera parameter information. The one-three module 103 is configured to transmit the coding sequence to a computer device.
In some embodiments, the camera parameter information includes internal reference information and camera pose information of the camera device.
In some embodiments, the one-one module 101 is configured to acquire a plurality of video frames shot by the camera device; and to acquire a plurality of pieces of candidate camera parameter information from when the plurality of video frames were shot and determine, from them, at least one piece of camera parameter information satisfying a preset condition. In some embodiments, the preset conditions include, but are not limited to: if the acquisition time of a piece of candidate camera parameter information is the same as the shooting time of one of the video frames, determining that piece of candidate camera parameter information as camera parameter information; sampling from the candidate camera parameter information at a preset interval to determine the corresponding camera parameter information; and if a piece of candidate camera parameter information differs from the candidate camera parameter information preceding it, determining it as camera parameter information. These selection rules are sketched below.
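A minimal sketch of the three preset conditions, treated here as alternatives; the data layout (timestamped tuples) and the sampling interval are illustrative assumptions.

```python
def select_parameters(candidates, frame_times, interval=None):
    """candidates: list of (timestamp, params) in acquisition order."""
    frame_times = set(frame_times)
    selected, previous = [], None
    for index, (ts, params) in enumerate(candidates):
        same_time = ts in frame_times                              # condition 1: matches a frame
        sampled = interval is not None and index % interval == 0   # condition 2: fixed interval
        changed = previous is not None and params != previous      # condition 3: value changed
        if same_time or sampled or changed:
            selected.append((ts, params))
        previous = params
    return selected
```

In practice a deployment would use only the condition(s) appropriate to its bandwidth and accuracy needs, e.g. the change-detection rule alone to avoid re-sending identical parameters.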
In some embodiments, the one-two module 102 is configured to generate, using a video coding technology, a plurality of corresponding video frame coding units and at least one camera parameter coding unit according to the plurality of video frames and the at least one piece of camera parameter information; and to generate a corresponding coding sequence from the plurality of video frame coding units and the at least one camera parameter coding unit. For example, the video coding technologies include, but are not limited to, those corresponding to standards such as H.264 and H.265. In some embodiments, the camera parameter information is stored in the supplemental enhancement information in the form of a JSON character string.
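A minimal sketch of packing the camera parameter information as a JSON string into an SEI body consisting of 16 bytes of identification followed by the string; the UUID value, field names, and the 2-byte length encoding are illustrative assumptions, not values fixed by this application.

```python
import json
import uuid

# Hypothetical identification for this payload type (byte identification info)
PARAM_UUID = uuid.uuid5(uuid.NAMESPACE_DNS, "example.camera-params").bytes

def build_sei_body(intrinsics, pose):
    """Return (byte length info, identification + JSON parameters + terminator)."""
    params = json.dumps({"K": intrinsics, "pose": pose}).encode("utf-8")
    body = PARAM_UUID + params + b"\x00"     # 16-byte id + string + C terminator
    return len(body).to_bytes(2, "big"), body

length_info, body = build_sei_body(
    intrinsics=[[1000, 0, 960], [0, 1000, 540], [0, 0, 1]],
    pose={"R": [[1, 0, 0], [0, 1, 0], [0, 0, 1]], "t": [0, 0, 0]},
)
```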
In some embodiments, the generating of a corresponding coding sequence from the plurality of video frame coding units and the at least one camera parameter coding unit includes: arranging the plurality of video frame coding units and the at least one camera parameter coding unit in the acquisition order of the video frames to form the coding sequence, for example as sketched below.
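A minimal sketch of this arrangement; placing each camera parameter unit immediately before the frame unit it describes, and pairing the two by timestamp, are assumptions for illustration.

```python
def arrange_units(frame_units, sei_units):
    """frame_units, sei_units: lists of (timestamp, payload) in acquisition order."""
    sei_by_time = dict(sei_units)
    sequence = []
    for ts, frame in frame_units:
        if ts in sei_by_time:                 # parameter info captured with this frame
            sequence.append(("SEI", sei_by_time[ts]))
        sequence.append(("VCL", frame))       # the coded video frame itself
    return sequence
```

Keeping the units in acquisition order is what later allows the decoder to pair each frame with the camera parameters that were in effect when it was shot.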
In some embodiments, the supplemental enhancement information includes corresponding byte length information and byte parameter information, where the byte parameter information carries the corresponding camera parameter information, and the byte length information indicates the byte length of the byte parameter information. In some embodiments, the supplemental enhancement information further includes corresponding byte identification information, and the byte length information indicates the byte length of the byte identification information and the byte parameter information.
In some embodiments, the transmission device further includes a one-four module (not shown) configured to perform anti-contention processing on the byte identification information and the byte parameter information, inserting a preset character at specific positions among their bytes to obtain the anti-contention-processed byte identification information and byte parameter information; and to determine the corresponding byte length information by counting the anti-contention-processed byte identification information and byte parameter information.
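The encoder-side counterpart of the earlier de-escaping sketch, again assuming the standard H.264/H.265 rule as the concrete form of the preset character: a 0x03 byte is inserted after two consecutive 0x00 bytes whenever the next byte is 0x00 to 0x03, so no start code can appear inside the payload.

```python
def add_emulation_prevention(data: bytes) -> bytes:
    """Anti-contention processing (assumed 0x03 escape rule)."""
    out, zeros = bytearray(), 0
    for b in data:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)           # escape byte breaks the 0x00 0x00 run
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

escaped = add_emulation_prevention(b"\x00\x00\x01")
assert escaped == b"\x00\x00\x03\x01"
byte_length_info = len(escaped)        # counted after the processing, as above
```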
In some embodiments, the transmission device further includes a one-five module (not shown) configured to perform color space conversion on the plurality of video frames and determine a plurality of converted video frames conforming to a preset color space; the one-two module 102 is then configured to generate the coding sequence according to the plurality of converted video frames and the at least one piece of camera parameter information, where the coding sequence includes a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit includes supplemental enhancement information indicating the corresponding camera parameter information.
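A minimal sketch of the conversion step, taking full-range BT.601 YUV as an illustrative choice of preset color space; real encoders typically also expect a subsampled layout such as YUV420, which is omitted here.

```python
import numpy as np

def rgb_to_yuv(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB frame to full-range BT.601 YUV."""
    rgb = frame.astype(np.float32)
    m = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]], dtype=np.float32)
    yuv = rgb @ m.T
    yuv[..., 1:] += 128.0              # bias chroma so it fits in uint8
    return np.clip(yuv, 0.0, 255.0).astype(np.uint8)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # hypothetical black frame
converted = rgb_to_yuv(frame)                        # Y = 0, U = V = 128
```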
Here, the specific implementations of the one-one module 101, the one-two module 102, the one-three module 103, the one-four module, and the one-five module are the same as or similar to the embodiments of step S101, step S102, step S103, step S104, and step S105 described above, and are therefore not repeated here but are incorporated by reference.
Fig. 4 shows a computer device for transmitting video frames and camera parameter information according to another aspect of the present application. The computer device includes a display device and comprises a two-one module 201 and a two-two module 202. The two-one module 201 is configured to receive a coding sequence of a plurality of video frames shot by a camera device and sent by the corresponding transmission device, where the coding sequence includes a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit includes supplemental enhancement information indicating the corresponding camera parameter information. The two-two module 202 is configured to decode the coding sequence and obtain and present the corresponding plurality of video frames through the display device.
In some embodiments, the two-two module 202 is configured to ignore, based on a preset setting, the at least one camera parameter coding unit and decode only the plurality of video frame coding units, so as to obtain and present the corresponding plurality of video frames through the display device.
In some embodiments, the two-two module 202 is configured to decode the at least one camera parameter coding unit to obtain the corresponding camera parameter information, and to decode the plurality of video frame coding units to obtain and present the corresponding plurality of video frames through the display device; the computer device further comprises a two-three module (not shown) configured to determine, according to the camera parameter information, coordinate transformation information from a world coordinate system to a pixel coordinate system of the camera device, and to superimpose and present virtual information in the plurality of video frames according to the coordinate transformation information. In some embodiments, the supplemental enhancement information includes corresponding byte length information, together with byte identification information and byte parameter information that have undergone anti-contention processing, where the byte parameter information carries the corresponding camera parameter information, and the byte length information indicates the byte length of the anti-contention-processed byte identification information and byte parameter information. The decoding of the at least one camera parameter coding unit to obtain the corresponding camera parameter information further comprises: decoding the at least one camera parameter coding unit to obtain the corresponding at least one piece of supplemental enhancement information, and reading the byte length information and the anti-contention-processed byte identification information and byte parameter information from the supplemental enhancement information; reversing the anti-contention processing on the anti-contention-processed byte identification information and byte parameter information to obtain the real byte identification information and real byte parameter information as they were before the anti-contention processing, and determining the corresponding real byte length information, where the real byte length information indicates the byte lengths of the real byte identification information and the real byte parameter information; and reading the corresponding camera parameter information from the real byte parameter information according to the real byte length information and the real byte identification information, thereby determining the corresponding at least one piece of camera parameter information.
In some embodiments, the computer device further comprises a two-four module (not shown) configured to determine and store an encapsulation file for the plurality of video frames according to the coding sequence; the two-two module 202 is configured, if a presentation operation on the encapsulation file is obtained, to decode the coding sequence in the encapsulation file and obtain and present the corresponding plurality of video frames through the display device.
The specific implementations of the two-one module 201, the two-two module 202, the two-three module, and the two-four module are the same as or similar to the embodiments of step S201, step S202, step S203, and step S204 described above, and are therefore not repeated here but are incorporated by reference.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the previous items.
The present application also provides a computer program product which, when executed by a computer device, performs the method described in any of the foregoing items.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the foregoing items.
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described herein.
In some embodiments, as shown in FIG. 5, the system 300 can be implemented as any of the devices described above in the various embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. As such, the software programs (including associated data structures) of the present application can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and any other media, now known or later developed, capable of storing computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (21)

1. A method for transmitting video frames and camera parameter information, applied to a transmission device connected with a camera device, wherein the method comprises the following steps:
acquiring a plurality of video frames shot by the camera device and at least one piece of camera parameter information from when the plurality of video frames were shot;
generating a coding sequence according to the plurality of video frames and the at least one piece of camera parameter information, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplemental enhancement information used for indicating the corresponding camera parameter information;
and transmitting the coding sequence to a computer device.
2. The method according to claim 1, wherein the acquiring of a plurality of video frames shot by the camera device and at least one piece of camera parameter information from when the plurality of video frames were shot comprises:
acquiring a plurality of video frames shot by the camera device;
and acquiring a plurality of pieces of candidate camera parameter information from when the plurality of video frames were shot, and determining, from the plurality of pieces of candidate camera parameter information, at least one piece of camera parameter information satisfying a preset condition.
3. The method according to claim 2, wherein the preset condition comprises at least any one of:
if the acquisition time of a piece of candidate camera parameter information is the same as the shooting time of one of the video frames, determining that piece of candidate camera parameter information as camera parameter information;
sampling from the candidate camera parameter information at a preset interval to determine the corresponding camera parameter information;
and if a piece of candidate camera parameter information is different from the candidate camera parameter information preceding it, determining that piece of candidate camera parameter information as camera parameter information.
4. The method according to claim 1, wherein the generating of a coding sequence according to the plurality of video frames and the at least one piece of camera parameter information comprises:
generating a plurality of corresponding video frame coding units and at least one camera parameter coding unit according to the plurality of video frames and the at least one piece of camera parameter information by using a video coding technology;
and generating a corresponding coding sequence according to the plurality of video frame coding units and the at least one camera parameter coding unit.
5. The method according to claim 4, wherein the generating of a corresponding coding sequence according to the plurality of video frame coding units and the at least one camera parameter coding unit comprises:
arranging the plurality of video frame coding units and the at least one camera parameter coding unit in the acquisition order of the video frames to form the coding sequence.
6. The method according to any one of claims 1 to 5, wherein the camera parameter information comprises internal reference information and camera pose information of the camera device.
7. The method according to any one of claims 1 to 6, wherein the camera parameter information is stored in the supplemental enhancement information in the form of a JSON character string.
8. The method according to any one of claims 1 to 7, wherein the supplemental enhancement information comprises corresponding byte length information and byte parameter information, wherein the byte parameter information carries the corresponding camera parameter information, and the byte length information is used for indicating the byte length of the byte parameter information.
9. The method according to claim 8, wherein the supplemental enhancement information further comprises corresponding byte identification information, and the byte length information is used for indicating the byte length of the byte identification information and the byte parameter information.
10. The method according to claim 8 or 9, wherein the method further comprises:
performing anti-contention processing on the byte identification information and the byte parameter information, inserting a preset character at specific positions among the bytes of the byte identification information and the byte parameter information to obtain the anti-contention-processed byte identification information and byte parameter information;
and determining the corresponding byte length information by counting the anti-contention-processed byte identification information and byte parameter information.
11. The method according to claim 1, wherein the method further comprises:
performing color space conversion on the plurality of video frames, and determining a plurality of converted video frames conforming to a preset color space;
wherein the generating of a coding sequence according to the plurality of video frames and the at least one piece of camera parameter information comprises:
generating a coding sequence according to the plurality of converted video frames and the at least one piece of camera parameter information, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplemental enhancement information used for indicating the corresponding camera parameter information.
12. A method for transmitting video frames and camera parameter information, applied to a computer device comprising a display device, wherein the method comprises the following steps:
receiving a coding sequence of a plurality of video frames shot by a camera device and sent by a corresponding transmission device, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplemental enhancement information used for indicating the corresponding camera parameter information;
and decoding the coding sequence, and obtaining and presenting the corresponding plurality of video frames through the display device.
13. The method according to claim 12, wherein the decoding of the coding sequence and the obtaining and presenting of the corresponding plurality of video frames through the display device comprise:
ignoring, based on a preset setting, the at least one camera parameter coding unit, and decoding only the plurality of video frame coding units to obtain and present the corresponding plurality of video frames through the display device.
14. The method according to claim 12, wherein the decoding of the coding sequence and the obtaining and presenting of the corresponding plurality of video frames through the display device comprise:
decoding the at least one camera parameter coding unit to obtain the corresponding camera parameter information;
and decoding the plurality of video frame coding units to obtain and present the corresponding plurality of video frames through the display device;
wherein the method further comprises:
determining, according to the camera parameter information, coordinate transformation information from a world coordinate system to a pixel coordinate system of the camera device;
and superimposing and presenting virtual information in the plurality of video frames according to the coordinate transformation information.
15. The method according to claim 14, wherein the supplemental enhancement information comprises corresponding byte length information, together with byte identification information and byte parameter information that have undergone anti-contention processing, wherein the byte parameter information carries the corresponding camera parameter information, and the byte length information is used for indicating the byte length of the anti-contention-processed byte identification information and byte parameter information; wherein the decoding of the at least one camera parameter coding unit to obtain the corresponding camera parameter information further comprises:
decoding the at least one camera parameter coding unit to obtain the corresponding at least one piece of supplemental enhancement information, and reading the byte length information and the anti-contention-processed byte identification information and byte parameter information from the supplemental enhancement information;
reversing the anti-contention processing on the anti-contention-processed byte identification information and byte parameter information to obtain the real byte identification information and real byte parameter information as they were before the anti-contention processing, and determining the corresponding real byte length information, wherein the real byte length information is used for indicating the byte lengths of the real byte identification information and the real byte parameter information;
and reading the corresponding camera parameter information from the real byte parameter information according to the real byte length information and the real byte identification information, thereby determining the corresponding at least one piece of camera parameter information.
16. The method according to any one of claims 12 to 14, wherein the method further comprises:
determining and storing an encapsulation file for the plurality of video frames according to the coding sequence;
wherein the decoding of the coding sequence and the obtaining and presenting of the corresponding plurality of video frames through the display device comprise:
if a presentation operation on the encapsulation file is obtained, decoding the coding sequence in the encapsulation file, and obtaining and presenting the corresponding plurality of video frames through the display device.
17. A transmission device for transmitting video frames and camera parameter information, the transmission device being connected with a camera device, wherein the device comprises:
a one-one module, configured to acquire a plurality of video frames shot by the camera device and at least one piece of camera parameter information from when the plurality of video frames were shot;
a one-two module, configured to generate a coding sequence according to the plurality of video frames and the at least one piece of camera parameter information, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplemental enhancement information used for indicating the corresponding camera parameter information;
and a one-three module, configured to transmit the coding sequence to a computer device.
18. A computer device for transmitting video frames and camera parameter information, the computer device comprising a display device, wherein the computer device comprises:
a two-one module, configured to receive a coding sequence of a plurality of video frames shot by a camera device and sent by a corresponding transmission device, wherein the coding sequence comprises a plurality of video frame coding units and at least one camera parameter coding unit, and each camera parameter coding unit comprises supplemental enhancement information used for indicating the corresponding camera parameter information;
and a two-two module, configured to decode the coding sequence and obtain and present the corresponding plurality of video frames through the display device.
19. A computer device, wherein the device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 16.
20. A computer-readable storage medium having a computer program/instructions stored thereon, characterized in that the computer program/instructions, when executed, cause a system to perform the steps of the method according to any one of claims 1 to 16.
21. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any of claims 1 to 16.
CN202211481736.6A 2022-11-24 2022-11-24 Method and equipment for transmitting video frame and camera shooting parameter information Pending CN115866254A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211481736.6A CN115866254A (en) 2022-11-24 2022-11-24 Method and equipment for transmitting video frame and camera shooting parameter information
PCT/CN2023/121056 WO2024109317A1 (en) 2022-11-24 2023-09-25 Method and device for transmitting video frames and camera parameter information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211481736.6A CN115866254A (en) 2022-11-24 2022-11-24 Method and equipment for transmitting video frame and camera shooting parameter information

Publications (1)

Publication Number Publication Date
CN115866254A (en) 2023-03-28

Family

ID=85665814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211481736.6A Pending CN115866254A (en) 2022-11-24 2022-11-24 Method and equipment for transmitting video frame and camera shooting parameter information

Country Status (2)

Country Link
CN (1) CN115866254A (en)
WO (1) WO2024109317A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024109317A1 (en) * 2022-11-24 2024-05-30 亮风台(上海)信息科技有限公司 Method and device for transmitting video frames and camera parameter information

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1358027A (en) * 2000-12-06 2002-07-10 Lg电子株式会社 Video data codec unit and method
US20070242066A1 (en) * 2006-04-14 2007-10-18 Patrick Levy Rosenthal Virtual video camera device with three-dimensional tracking and virtual object insertion
CN101321279A (en) * 2007-06-05 2008-12-10 美国博通公司 Method and system for processing data
WO2012153450A1 (en) * 2011-05-11 2012-11-15 パナソニック株式会社 Video transmission device and video transmission method
CN104106264A (en) * 2012-02-17 2014-10-15 微软公司 Metadata assisted video decoding
CN104363430A (en) * 2014-12-04 2015-02-18 高新兴科技集团股份有限公司 Augmented reality camera monitoring method and system thereof
CN107924575A (en) * 2015-08-20 2018-04-17 微软技术许可有限责任公司 The asynchronous 3D annotations of video sequence
CN109982067A (en) * 2017-12-28 2019-07-05 浙江宇视科技有限公司 Method for processing video frequency and device
CN111640181A (en) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment and storage medium
CN112422984A (en) * 2020-10-26 2021-02-26 眸芯科技(上海)有限公司 Code stream preprocessing device, system and method of multi-core decoding system
CN112533014A (en) * 2020-11-26 2021-03-19 Oppo广东移动通信有限公司 Target article information processing and displaying method, device and equipment in live video
CN113206971A (en) * 2021-04-13 2021-08-03 聚好看科技股份有限公司 Image processing method and display device
CN113345028A (en) * 2021-06-01 2021-09-03 亮风台(上海)信息科技有限公司 Method and equipment for determining target coordinate transformation information
CN115190237A (en) * 2022-06-20 2022-10-14 亮风台(上海)信息科技有限公司 Method and equipment for determining rotation angle information of bearing equipment
WO2022222656A1 (en) * 2021-04-20 2022-10-27 中兴通讯股份有限公司 Methods and apparatuses for processing code stream, terminal device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561230B (en) * 2013-09-27 2018-01-23 高新兴科技集团股份有限公司 A kind of video camera information processing equipment and its processing method
WO2020122361A1 (en) * 2018-12-12 2020-06-18 엘지전자 주식회사 Method for displaying 360-degree video including camera lens information, and device therefor
CN114326764A (en) * 2021-11-29 2022-04-12 上海岩易科技有限公司 Rtmp transmission-based smart forestry unmanned aerial vehicle fixed-point live broadcast method and unmanned aerial vehicle system
CN115866254A (en) * 2022-11-24 2023-03-28 亮风台(上海)信息科技有限公司 Method and equipment for transmitting video frame and camera shooting parameter information

Also Published As

Publication number Publication date
WO2024109317A1 (en) 2024-05-30

Similar Documents

Publication Publication Date Title
US20210337217A1 (en) Video analytics encoding for improved efficiency of video processing and compression
CN109840879B (en) Image rendering method and device, computer storage medium and terminal
US10771792B2 (en) Encoding data arrays
CN109587478B (en) Media information processing method and device
US11755271B2 (en) Stitching display system and image processing method of the same
CN111885346A (en) Picture code stream synthesis method, terminal, electronic device and storage medium
WO2024109317A1 (en) Method and device for transmitting video frames and camera parameter information
US11798195B2 (en) Method and apparatus for encoding and decoding three-dimensional scenes in and from a data stream
WO2018219202A1 (en) Method for presenting and packaging video image, and device for presenting and packaging video image
US11051080B2 (en) Method for improving video resolution and video quality, encoder, and decoder
CN110809169B (en) Internet comment information directional shielding system and method
US20230025664A1 (en) Data processing method and apparatus for immersive media, and computer-readable storage medium
US20220353459A1 (en) Systems and methods for signal transmission
CN116017060A (en) Vehicle image data processing method and device
CN112771878A (en) Method, client and server for processing media data
CN115136594A (en) Method and apparatus for enabling view designation for each atlas in immersive video
CN110876069A (en) Method, device and equipment for acquiring video screenshot and storage medium
US20230062933A1 (en) Data processing method, apparatus, and device for non-sequential point cloud media
CN109495793B (en) Bullet screen writing method, device, equipment and medium
US20230396808A1 (en) Method and apparatus for decoding point cloud media, and method and apparatus for encoding point cloud media
US20230046971A1 (en) Data processing method, apparatus, and device for point cloud media, and storage medium
WO2022037423A1 (en) Data processing method, apparatus and device for point cloud media, and medium
CN116684539A (en) Method and device for presenting augmented reality information in video stream
CN117176979B (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
US20220030283A1 (en) Media data processing method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai
Applicant after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.
Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203
Applicant before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.