CN116708806A - Encoding method, decoding method and device applicable to spatial pictures - Google Patents

Encoding method, decoding method and device applicable to spatial pictures

Info

Publication number
CN116708806A
Authority
CN
China
Prior art keywords
picture
target
space
spatial
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310484498.2A
Other languages
Chinese (zh)
Inventor
张夏杰
魏伟
郭景昊
杜峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202310484498.2A
Publication of CN116708806A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an encoding method and apparatus, and a decoding method and apparatus, applicable to spatial pictures. One embodiment of the encoding method includes: generating a spatial picture matrix comprising a plurality of spatial pictures acquired at a plurality of sampling points, according to the longitudes and latitudes corresponding to the sampling points at different spatial angles; dividing the spatial picture matrix with a sub-matrix of a preset size to obtain a plurality of picture groups; and encoding the spatial pictures in the spatial picture matrix according to the key frames and predicted frames included in each of the picture groups, to generate an encoded file. During encoding, the spatial pictures are reorganized into the spatial picture matrix, and the picture groups within the matrix manage their constituent spatial pictures, so that the spatial pictures are decoupled from temporal order, which improves the efficiency and accuracy of encoding spatial pictures.

Description

Encoding method, decoding method and device applicable to spatial pictures
Technical Field
The embodiments of the present application relate to the field of computer technology, in particular to picture encoding and decoding technology, and especially to an encoding method, an encoding apparatus, a decoding method, a decoding apparatus, a computer-readable medium, and an electronic device suitable for spatial pictures.
Background
Mainstream video encoding and decoding technologies are all based on temporal sequences: adjacent frames in the code stream are continuous in time. This is a very strong prior, which means that information from neighboring frames can be used with confidence during encoding. Existing video coding technology can therefore be applied to temporally ordered pictures, but not to spatial pictures.
Disclosure of Invention
The embodiment of the application provides an encoding method, an encoding device, a decoding method, a decoding device, a computer readable medium and electronic equipment suitable for space pictures.
In a first aspect, an embodiment of the present application provides a coding method applicable to a spatial picture, including: generating a space picture matrix comprising a plurality of space pictures acquired by a plurality of sampling points according to the longitudes and latitudes corresponding to the sampling points under different space angles; dividing a space picture matrix by using a sub-matrix with a preset size to obtain a plurality of picture groups; and encoding the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups to generate an encoded file.
In some examples, the generating a spatial image matrix including a plurality of spatial images acquired by a plurality of sampling points according to the longitude and latitude corresponding to the plurality of sampling points under different spatial angles includes: and arranging a plurality of space pictures by taking the longitude of the sampling point corresponding to the space picture as the horizontal axis and taking the latitude of the sampling point corresponding to the space picture as the vertical axis to generate a space picture matrix.
In some examples, the encoding the spatial picture in the spatial picture matrix according to the key frame and the predicted frame included in each of the plurality of picture groups to generate the encoded file includes: arranging the picture groups corresponding to sampling points with the same latitude in the plurality of picture groups according to the order of the longitude of the sampling points corresponding to the picture groups from small to large, and generating a plurality of picture group subsequences; arranging a plurality of picture group subsequences according to the order of the latitudes of sampling points corresponding to the picture groups from small to large, determining picture group identifiers corresponding to the picture groups, and generating a picture group sequence; according to the sequence of the key frames before the predicted frames, arranging the space pictures in each picture group in the picture group sequence, determining the picture identification of the space picture in each picture group in the plurality of picture groups, and generating a space picture sequence; and encoding the spatial picture sequence according to the key frames and the predicted frames included in each of the plurality of picture groups to generate an encoded file.
In some examples, the above-mentioned arranging the spatial pictures in each of the picture groups in the sequence of picture groups in order of key frame first and prediction frame second, determining the picture identification of the spatial picture in each of the plurality of picture groups, and generating the sequence of spatial pictures includes: and arranging the space pictures in each picture group in the picture group sequence according to the sequence of the first key frame and the second prediction frame by taking the space picture in the central position of the sub-matrix corresponding to the picture group as a key frame and taking the space picture adjacent to the key frame as a prediction frame, and determining the picture identification of the space picture in each picture group in the plurality of picture groups to generate the space picture sequence.
In some examples, the encoding the spatial picture sequence according to the key frames and the predicted frames included in each of the plurality of picture groups to generate the encoded file includes: and for each of the plurality of picture groups, encoding the spatial picture sequence by adopting a reference mode that the predicted frame in the picture group only references the key frame in the picture group, and generating an encoding file.
In a second aspect, an embodiment of the present application provides a decoding method applicable to a spatial picture, including: according to the obtained operation information, determining the target longitude and latitude corresponding to the target space picture expected by the user; determining a target space picture and position information of a key frame in a target picture group to which the target space picture belongs in an encoding file according to the longitude and latitude of the target; and decoding the data in the position represented by the position information in the encoded file to obtain the target space picture.
In some examples, the determining, according to the target longitude and latitude, the target spatial picture and the key frame in the target picture group to which the target spatial picture belongs, the location information in the encoded file includes: determining a target picture group identifier of a target picture group to which the target space picture belongs and a target picture identifier of the target space picture according to the target longitude and latitude and the key frame longitude and latitude of key frames in each picture group in the coding file; and determining the position information according to the target picture group identifier and the target picture identifier.
In some examples, determining the target picture group identifier of the target picture group to which the target spatial picture belongs and the target picture identifier of the target spatial picture according to the target longitude and latitude and the key frame longitude and latitude of the key frame in each picture group in the encoded file includes: the picture group which belongs to the key frame and corresponds to the key frame with the longitude and latitude closest to the target longitude and latitude in the key frame longitude and latitude of the key frames in each picture group in the coding file is taken as the target picture group, and the target picture group identification is determined; and determining the target picture identification according to the offset between the target longitude and latitude and the key frame longitude and latitude corresponding to the key frame in the target picture group.
In some examples, decoding the data in the position represented by the position information in the encoded file to obtain the target spatial picture includes: determining whether the target space picture is a key frame in the target picture group or a predicted frame in the target picture group according to the target picture identification; and in response to determining that the target space picture is a predicted frame in the target picture group, decoding a key frame in the target picture group at the position represented by the position information and a predicted frame corresponding to the target picture identifier to obtain the target space picture.
In some examples, the decoding to obtain the target spatial picture according to the position information further includes: and in response to determining that the target space picture is a key frame in the target picture group, decoding the key frame in the target picture group at the position characterized by the position information to obtain the target space picture.
In a third aspect, an embodiment of the present application provides an encoding apparatus applicable to a spatial picture, including: the first generation unit is configured to generate a space picture matrix comprising a plurality of space pictures acquired by a plurality of sampling points according to the longitude and latitude corresponding to the plurality of sampling points under different space angles; the dividing unit is configured to divide the space picture matrix by a sub-matrix with a preset size to obtain a plurality of picture groups; and a second generation unit configured to encode the spatial picture in the spatial picture matrix according to the key frame and the predicted frame included in each of the plurality of picture groups, and generate an encoded file.
In some examples, the first generating unit is further configured to: and arranging a plurality of space pictures by taking the longitude of the sampling point corresponding to the space picture as the horizontal axis and taking the latitude of the sampling point corresponding to the space picture as the vertical axis to generate a space picture matrix.
In some examples, the second generating unit is further configured to: arranging the picture groups corresponding to sampling points with the same latitude in the plurality of picture groups according to the order of the longitude of the sampling points corresponding to the picture groups from small to large, and generating a plurality of picture group subsequences; arranging a plurality of picture group subsequences according to the order of the latitudes of sampling points corresponding to the picture groups from small to large, determining picture group identifiers corresponding to the picture groups, and generating a picture group sequence; according to the sequence of the key frames before the predicted frames, arranging the space pictures in each picture group in the picture group sequence, determining the picture identification of the space picture in each picture group in the plurality of picture groups, and generating a space picture sequence; and encoding the spatial picture sequence according to the key frames and the predicted frames included in each of the plurality of picture groups to generate an encoded file.
In some examples, the second generating unit is further configured to: and arranging the space pictures in each picture group in the picture group sequence according to the sequence of the first key frame and the second prediction frame by taking the space picture in the central position of the sub-matrix corresponding to the picture group as a key frame and taking the space picture adjacent to the key frame as a prediction frame, and determining the picture identification of the space picture in each picture group in the plurality of picture groups to generate the space picture sequence.
In some examples, the second generating unit is further configured to: and for each of the plurality of picture groups, encoding the spatial picture sequence by adopting a reference mode that the predicted frame in the picture group only references the key frame in the picture group, and generating an encoding file.
In a fourth aspect, an embodiment of the present application provides a decoding apparatus suitable for a spatial picture, including: the first determining unit is configured to determine the target longitude and latitude corresponding to the target space picture expected by the user according to the acquired operation information; the second determining unit is configured to determine a target space picture and position information of a key frame in a target picture group to which the target space picture belongs in the encoded file according to the target longitude and latitude; and the decoding unit is configured to decode the data in the position represented by the position information in the encoded file to obtain the target space picture.
In some examples, the second determining unit is further configured to: determining a target picture group identifier of a target picture group to which the target space picture belongs and a target picture identifier of the target space picture according to the target longitude and latitude and the key frame longitude and latitude of key frames in each picture group in the coding file; and determining the position information according to the target picture group identifier and the target picture identifier.
In some examples, the second determining unit is further configured to: the picture group which belongs to the key frame and corresponds to the key frame with the longitude and latitude closest to the target longitude and latitude in the key frame longitude and latitude of the key frames in each picture group in the coding file is taken as the target picture group, and the target picture group identification is determined; and determining the target picture identification according to the offset between the target longitude and latitude and the key frame longitude and latitude corresponding to the key frame in the target picture group.
In some examples, the decoding unit described above is further configured to: determining whether the target space picture is a key frame in the target picture group or a predicted frame in the target picture group according to the target picture identification; and in response to determining that the target space picture is a predicted frame in the target picture group, decoding a key frame in the target picture group at the position represented by the position information and a predicted frame corresponding to the target picture identifier to obtain the target space picture.
In some examples, the decoding unit described above is further configured to: and in response to determining that the target space picture is a key frame in the target picture group, decoding the key frame in the target picture group at the position characterized by the position information to obtain the target space picture.
In a fifth aspect, embodiments of the present application provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as described in any of the implementations of the first and second aspects.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first and second aspects.
According to the encoding method and apparatus for spatial pictures provided by the embodiments of the present application, a spatial picture matrix comprising a plurality of spatial pictures acquired at a plurality of sampling points is generated according to the longitudes and latitudes corresponding to the sampling points at different spatial angles; the spatial picture matrix is divided with a sub-matrix of a preset size to obtain a plurality of picture groups; and the spatial pictures in the spatial picture matrix are encoded according to the key frames and predicted frames included in each of the picture groups to generate an encoded file, thereby providing an encoding method applicable to spatial pictures.
According to the decoding method and apparatus for spatial pictures provided by the embodiments of the present application, the target longitude and latitude corresponding to the target spatial picture desired by the user is determined according to the acquired operation information; the position information, in the encoded file, of the target spatial picture and of the key frame in the target picture group to which it belongs is determined according to the target longitude and latitude; and the data at the position represented by the position information in the encoded file is decoded to obtain the target spatial picture, thereby providing a decoding method applicable to spatial pictures. During decoding, the target longitude and latitude corresponding to the desired target spatial picture is mapped to the corresponding position in the encoded file, which enables decoding of data at any position and improves the efficiency and flexibility of decoding spatial pictures.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a coding method applicable to spatial pictures according to the present application;
Fig. 3 is a schematic diagram of the spatial layout of sampling points in the longitude and latitude sampling manner according to the present embodiment;
fig. 4 is a schematic diagram of a spatial picture matrix according to the present embodiment;
fig. 5 is a schematic diagram of a reference manner between a predicted frame and a key frame in a group of pictures according to the present embodiment;
fig. 6 is a schematic diagram of an application scene of the encoding method applicable to a spatial picture according to the present embodiment;
fig. 7 is a flowchart of yet another embodiment of an encoding method applicable to a spatial picture according to the present application;
FIG. 8 is a flow chart of one embodiment of a decoding method suitable for spatial pictures according to the present application;
fig. 9 is a schematic diagram of a decoding track according to the present embodiment;
fig. 10 is a schematic view of a screen sliding trajectory according to the present embodiment;
fig. 11 is a flowchart of still another embodiment of a decoding method suitable for spatial pictures according to the present application;
FIG. 12 is a block diagram of one embodiment of an encoding apparatus adapted for spatial pictures according to the present application;
fig. 13 is a block diagram of one embodiment of a decoding apparatus adapted for spatial pictures according to the present application;
FIG. 14 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that, in the technical solution of the present disclosure, the collection, updating, analysis, processing, use, transmission, and storage of users' personal information all comply with relevant laws and regulations, are carried out for legitimate purposes, and do not violate public order and good morals. Necessary measures are taken for users' personal information to prevent illegal access to users' personal information data and to safeguard users' personal information security, network security, and national security.
Fig. 1 shows an exemplary architecture 100 to which the present application may be applied for encoding methods and apparatuses for spatial pictures, and decoding methods and apparatuses for spatial pictures.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The communication connection between the terminal devices 101, 102, 103 constitutes a topology network, the network 104 being the medium for providing the communication link between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. The terminal devices 101, 102, 103 may be hardware devices or software supporting network connections for data interaction and data processing. When the terminal device 101, 102, 103 is hardware, it may be various electronic devices supporting network connection, information acquisition, interaction, display, processing, etc., including but not limited to smart phones, image capture devices, tablet computers, electronic book readers, laptop and desktop computers, etc. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
The server 105 may be a server that provides various services, for example, a background processing server that receives the spatial pictures provided by the terminal devices 101, 102, 103 and encodes them to obtain an encoded file. As another example, the background processing server receives operation information from the terminal devices 101, 102, 103 and decodes the corresponding data to obtain the target spatial picture desired by the user. As an example, the server 105 may be a cloud server.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services), or as a single software or software module. The present application is not particularly limited herein.
It should be further noted that, the encoding method and the decoding method applicable to the spatial picture provided by the embodiments of the present application may be executed by a server, or may be executed by a terminal device, or may be executed by the server and the terminal device in cooperation with each other. Accordingly, the respective portions (for example, respective units) included in the encoding device and the decoding device for the spatial picture may be provided in the server, the terminal device, or the server and the terminal device, respectively.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. When the electronic device (e.g., a server or a terminal device) on which the encoding method and the decoding method applicable to spatial pictures run does not need to exchange data with other electronic devices, the system architecture may include only that electronic device.
With continued reference to fig. 2, a flow 200 of one embodiment of a coding method suitable for spatial pictures is shown, comprising the steps of:
step 201, generating a spatial picture matrix including a plurality of spatial pictures acquired by a plurality of sampling points according to the longitudes and latitudes corresponding to the plurality of sampling points under different spatial angles.
In this embodiment, an execution body (for example, a terminal device or a server in fig. 1) of the encoding method for a spatial picture generates a spatial picture matrix including a plurality of spatial pictures acquired by a plurality of sampling points according to the longitudes and latitudes corresponding to the plurality of sampling points under different spatial angles. The space picture represents pictures of the target object obtained under different space angles. The target object may be various objects such as a person, an object, and the like.
As an example, the plurality of sampling points may be uniformly disposed at different spatial angles to take spatial pictures of the target object at different spatial angles, that is, a uniform sampling manner.
As yet another example, the plurality of sampling points may be disposed non-uniformly at different spatial angles, surrounding the target object at preset longitude and latitude intervals, so as to take spatial pictures of the target object at different spatial angles; that is, the longitude and latitude sampling manner.
As shown in fig. 3, a schematic diagram 300 of the spatial layout of sampling points in the longitude and latitude sampling manner is shown. For the upper hemisphere of the target object, sampling is performed at intervals of 10°; there are then 36 sampling points in the longitude direction and 9 sampling points in the latitude direction, giving 324 (36×9) sampling points on the upper hemisphere of the target object. The left sub-graph in fig. 3 is a top view of the spatial layout of the sampling points, and the right sub-graph is a front view.
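The following Python sketch, which is not part of the patent (the function name and the (longitude, latitude) tuple format are assumptions), enumerates the 324 sampling points of this 10° longitude and latitude grid:

```python
STEP_DEG = 10

def build_sampling_grid(step_deg: int = STEP_DEG):
    # Upper hemisphere sampled every step_deg degrees:
    # 36 longitudes (0..350) x 9 latitudes (0..80).
    longitudes = range(0, 360, step_deg)
    latitudes = range(0, 90, step_deg)
    return [(lon, lat) for lat in latitudes for lon in longitudes]

points = build_sampling_grid()
assert len(points) == 324   # 36 x 9 sampling points
```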
Sampling points at different spatial angles correspond to different longitudes and latitudes, so spatial pictures at different spatial angles are obtained through sampling. The spatial pictures at different spatial angles are arranged in a certain order to obtain the spatial picture matrix. Each spatial picture is named according to the longitude and latitude of its corresponding sampling point; the plurality of spatial pictures are then arranged according to their naming information to generate the spatial picture matrix. For example, if the longitude and latitude of the sampling point corresponding to a spatial picture is (10°, 20°), the spatial picture may be named "10-20.jpg".
In some optional implementations of this embodiment, the executing body may execute the step 201 as follows: and arranging a plurality of space pictures by taking the longitude of the sampling point corresponding to the space picture as the horizontal axis and taking the latitude of the sampling point corresponding to the space picture as the vertical axis to generate a space picture matrix.
Specifically, the longitude of the sampling point corresponding to the space picture is taken as the horizontal axis, the latitude of the sampling point corresponding to the space picture is taken as the vertical axis, and the plurality of space pictures are arranged in the order from the small longitude to the large latitude, so as to obtain the space picture matrix.
Continuing with the example of the sampling points shown in fig. 3, a spatial picture matrix is generated as shown in fig. 4, in which the latitude ranges from 0° to 80° and the longitude ranges from 0° to 350°.
In the implementation manner, based on the longitude and latitude of the sampling point, an arrangement manner which is more in line with the spatial correlation among a plurality of spatial pictures is provided, so that the generated spatial picture matrix is more beneficial to spatial coding, and the spatial coding efficiency is improved.
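A minimal sketch of this arrangement step, assuming the "<longitude>-<latitude>.jpg" naming convention from the example above and a flat directory of pictures (the directory layout and function name are illustrative assumptions, not part of the patent):

```python
from pathlib import Path

def build_picture_matrix(picture_dir: str):
    """Arrange pictures named '<longitude>-<latitude>.jpg' into a matrix whose
    columns are ordered by longitude and whose rows are ordered by latitude."""
    entries = []
    for path in Path(picture_dir).glob("*.jpg"):
        lon_str, lat_str = path.stem.split("-")
        entries.append((int(lat_str), int(lon_str), path))
    longitudes = sorted({lon for _, lon, _ in entries})
    latitudes = sorted({lat for lat, _, _ in entries})
    matrix = [[None] * len(longitudes) for _ in latitudes]
    for lat, lon, path in entries:
        matrix[latitudes.index(lat)][longitudes.index(lon)] = path
    return matrix
```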
Step 202, dividing the space picture matrix by a sub-matrix with a preset size to obtain a plurality of picture groups.
In this embodiment, the executing body may divide the spatial image matrix with a sub-matrix of a preset size to obtain a plurality of image groups.
As an example, the preset size may be a fixed size set in advance. The length and width of the sub-matrix may be the same or different. For example, the preset size is 3×3.
As yet another example, the preset size may be flexibly determined according to the density of the sampling points. When the sampling points are denser, a larger preset size can be set; when the sampling points are sparse, a smaller preset size can be set, that is, the size of the preset size is positively correlated with the density of the sampling points.
When the spatial picture matrix is divided with a sub-matrix of a preset size, the number of spatial pictures in each picture group equals the number of elements in the sub-matrix. With continued reference to fig. 4, dividing the spatial picture matrix with a sub-matrix of the preset size 3×3 yields 36 picture groups, GOP1-GOP36, each including 9 spatial pictures.
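A minimal sketch of this division step, assuming (as in the example) that the matrix dimensions are exact multiples of the sub-matrix size; the function name and the flat per-group list format are illustrative assumptions:

```python
def divide_into_gops(matrix, gop_rows: int = 3, gop_cols: int = 3):
    """Split the spatial picture matrix into picture groups using a sub-matrix of
    gop_rows x gop_cols; groups are produced row by row (latitude first, then
    longitude), matching the GOP1..GOP36 numbering of the example."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    gops = []
    for r0 in range(0, n_rows, gop_rows):
        for c0 in range(0, n_cols, gop_cols):
            gops.append([matrix[r][c]
                         for r in range(r0, r0 + gop_rows)
                         for c in range(c0, c0 + gop_cols)])
    return gops
```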
And 203, encoding the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups, and generating an encoded file.
In this embodiment, the executing body may encode the spatial picture in the spatial picture matrix according to the key frame and the predicted frame included in each of the plurality of picture groups, to generate the encoded file.
In compression encoding, each spatial picture is a still image. During actual compression, various algorithms are used to reduce the data volume, the IPB frame scheme being the most common. An I frame, also called a key frame or intra-coded frame, is typically the first frame of each picture group; it is moderately compressed and serves as a reference point for random access. A key frame can be regarded as the compressed product of a single image, from which redundant information has been removed. A P frame, also called a predicted frame or forward-predictive-coded frame, is obtained by removing from its compressed data the redundant information it shares with the key frame of its picture group; the predicted frame therefore represents the difference between itself and the corresponding key frame. When decoding, the corresponding key frame must be referenced together with the encoded data of the predicted frame in order to reconstruct the spatial picture corresponding to the predicted frame.
In this embodiment, a determination manner may be preset to determine a key frame and a predicted frame in a picture group; and further, encoding the spatial picture in the spatial picture matrix according to the key frame and the predicted frame respectively included in the plurality of picture groups, and generating an encoded file.
For example, for each of the plurality of picture groups, a spatial picture corresponding to a sampling point with the smallest longitude and latitude in the picture group is used as a key frame, and the rest of the spatial pictures in the picture group are used as prediction frames, so that the spatial pictures in the spatial picture matrix are encoded, and an encoded file is generated.
In some optional implementations of this embodiment, the executing body may execute the step 203 as follows:
first, according to the order of the longitude of the sampling point corresponding to the picture group from small to large, the picture groups corresponding to the sampling points with the same latitude in the plurality of picture groups are arranged, and a plurality of picture group subsequences are generated.
With continued reference to fig. 4, the plurality of picture group sub-sequences includes a first picture group sub-sequence, a second picture group sub-sequence, and a third picture group sub-sequence, where the first picture group sub-sequence is "GOP1 -> GOP2 -> ... -> GOP12", the second is "GOP13 -> GOP14 -> ... -> GOP24", and the third is "GOP25 -> GOP26 -> ... -> GOP36".
Secondly, arranging a plurality of picture group subsequences according to the order of the latitudes of sampling points corresponding to the picture groups from small to large, determining picture group identifiers corresponding to the picture groups, and generating a picture group sequence.
With continued reference to fig. 4, the picture group sequence is "GOP1 -> GOP2 -> GOP3 -> ... -> GOP35 -> GOP36".
Thirdly, arranging the space pictures in each picture group in the picture group sequence according to the sequence of the key frame and the predicted frame, determining the picture identification of the space picture in each picture group in the plurality of picture groups, and generating the space picture sequence.
Specifically, on the basis of the determined picture group sequences, for the spatial pictures in each picture group, the picture sequences in the picture groups are determined according to the sequence of the key frames and the predicted frames, so that the picture identification of the spatial pictures in each picture group is determined according to the picture sequences corresponding to the picture groups, and finally the spatial picture sequences are obtained.
For a plurality of predicted frames in each picture group, an arrangement order of the plurality of predicted frames may be determined in a preset determination manner. For example, a plurality of prediction frames are arranged in the order of the latitude from small to large and the longitude from small to large of the sampling point corresponding to the spatial picture.
With continued reference to fig. 4, the spatial picture sequence is "0 -> 1 -> 2 -> 3 -> ... -> 322 -> 323". The spatial picture sequence includes a plurality of picture sequences, such as "0 -> 1 -> 2 -> ... -> 8" and "9 -> 10 -> 11 -> ... -> 17". Taking the picture sequence "0 -> 1 -> 2 -> ... -> 8" as an example, "0" is the picture identification of the key frame in the picture sequence, and "1" to "8" are the picture identifications of the predicted frames in the picture sequence.
Fourth, the spatial picture sequence is encoded according to the key frames and the predicted frames included in each of the plurality of picture groups, and an encoded file is generated.
In the implementation manner, the spatial picture matrix is firstly arranged to obtain the spatial picture sequence, and then coding is carried out according to the arrangement sequence of the spatial pictures in the spatial picture sequence and the key frames and the predicted frames in each picture group, so that the efficiency and the accuracy of the coding process are further improved.
In some optional implementations of this embodiment, the executing body may execute the third step by: and arranging the space pictures in each picture group in the picture group sequence according to the sequence of the first key frame and the second prediction frame by taking the space picture in the central position of the sub-matrix corresponding to the picture group as a key frame and taking the space picture adjacent to the key frame as a prediction frame, and determining the picture identification of the space picture in each picture group in the plurality of picture groups to generate the space picture sequence.
Continuing with the 3×3 sub-matrix shown in fig. 4 as an example, among the 9 spatial pictures in a divided picture group, the 8 spatial pictures surrounding the center position have high relevance to the spatial picture at the center position, because each of them can be obtained from the center picture by a 10° change in longitude and/or latitude.
The spatial picture at the central position of the sub-matrix corresponding to the picture group is used as a key frame, and the spatial picture adjacent to the key frame is used as a predicted frame, so that the predicted frame and the key frame have direct and strong relevance, direct reference of the predicted frame to the key frame in the encoding process of the spatial picture is facilitated, the encoding efficiency is improved, and the data volume of the encoded file is reduced.
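A hedged sketch of this ordering for a single 3×3 picture group; the exact traversal of the 8 neighbors is taken from the OFFSETS sequence used later in the decoding example, and the 10° spacing is assumed from the sampling example:

```python
# (d_longitude, d_latitude) offsets of picture ids 1..8 relative to the key frame.
NEIGHBOR_OFFSETS = [(-10, 0), (-10, -10), (0, -10), (10, -10),
                    (10, 0), (10, 10), (0, 10), (-10, 10)]

def order_group(center_lon: int, center_lat: int):
    """Picture id 0 is the key frame at the center of the 3x3 sub-matrix;
    ids 1..8 are the surrounding predicted frames."""
    ordered = [(center_lon, center_lat)]
    for d_lon, d_lat in NEIGHBOR_OFFSETS:
        ordered.append((center_lon + d_lon, center_lat + d_lat))
    return ordered
```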
In some optional implementations of this embodiment, the executing body may execute the fourth step by: and for each of the plurality of picture groups, encoding the spatial picture sequence by adopting a reference mode that the predicted frame in the picture group only references the key frame in the picture group, and generating an encoding file.
With continued reference to fig. 5, a schematic diagram 500 of the reference mode between predicted frames and the key frame in a picture group is shown. Each predicted frame in a picture group references only the key frame of that picture group during encoding, so as to obtain the encoded file.
In the implementation mode, the spatial picture sequence is encoded by adopting the reference mode of the predicted frame in the picture group and the key frame in the unique reference picture group, so that the complexity of the relation between the predicted frame and the key frame in the encoded file is reduced, and the determination speed and the decoding efficiency of data in the decoding process are improved.
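This reference structure can be illustrated with a toy encoder that treats every picture as a flat list of pixel values and stores predicted frames as residuals against their group's key frame only; this is a conceptual sketch under those assumptions, not the patent's actual compression scheme:

```python
def encode_group(pictures):
    """pictures[0] is the key frame (stored as-is); every other picture in the
    group is stored only as a residual against that key frame."""
    key = list(pictures[0])
    encoded = [("I", key)]
    for pred in pictures[1:]:
        residual = [p - k for p, k in zip(pred, key)]
        encoded.append(("P", residual))
    return encoded
```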
With continued reference to fig. 6, fig. 6 is a schematic diagram 600 of an application scenario of the encoding method applicable to spatial pictures according to the present embodiment. In the application scenario of fig. 6, first, a plurality of spatial pictures for a target object are sampled from a plurality of sampling points at different spatial angles by an image acquisition device. The spatial layout of the plurality of sampling points is shown as 601. Then, according to the longitude and latitude corresponding to the plurality of sampling points under different space angles, a space picture matrix comprising a plurality of space pictures acquired by the plurality of sampling points is generated, and the space picture matrix is shown as 602. Then, the spatial picture matrix 602 is divided by a sub-matrix 6021 of a preset size, resulting in a plurality of picture groups. And finally, coding the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups to generate a coding file.
According to the method provided by the embodiment of the application, the spatial picture matrix comprising a plurality of spatial pictures acquired by a plurality of sampling points is generated according to the longitude and latitude corresponding to the plurality of sampling points under different spatial angles; dividing a space picture matrix by using a sub-matrix with a preset size to obtain a plurality of picture groups; according to key frames and predicted frames included in each of a plurality of picture groups, space pictures in a space picture matrix are encoded, and an encoding file is generated, so that the encoding method applicable to the space pictures is provided.
With continued reference to fig. 7, there is shown a schematic flow chart 700 of a further embodiment of a coding method applicable to spatial pictures according to the present application, comprising the steps of:
in step 701, a plurality of spatial pictures are arranged with the longitude of the sampling point corresponding to the spatial picture as the horizontal axis and the latitude of the sampling point corresponding to the spatial picture as the vertical axis, so as to generate a spatial picture matrix.
Step 702, dividing the spatial image matrix with a sub-matrix of a preset size to obtain a plurality of image groups.
In step 703, the picture groups corresponding to the sampling points with the same latitude in the plurality of picture groups are arranged according to the order of the longitude of the sampling points corresponding to the picture groups from small to large, so as to generate a plurality of picture group subsequences.
Step 704, arranging the plurality of picture group subsequences according to the order of the latitudes of the sampling points corresponding to the picture groups from small to large, determining the picture group identifications corresponding to the plurality of picture groups, and generating a picture group sequence.
Step 705, using the spatial picture at the center of the sub-matrix corresponding to the picture group as a key frame, using the spatial picture adjacent to the key frame as a predicted frame, arranging the spatial pictures in each picture group in the picture group sequence according to the sequence of the key frame and the predicted frame, determining the picture identification of the spatial picture in each picture group in the plurality of picture groups, and generating the spatial picture sequence.
Step 706, for each of the plurality of picture groups, encoding the spatial picture sequence by using a reference mode that the predicted frame in the picture group uniquely references the key frame in the picture group, and generating an encoded file.
As can be seen from this embodiment, compared with the embodiment corresponding to fig. 2, the process 700 of the encoding method applicable to the spatial picture in this embodiment specifically illustrates the generation process of the spatial picture matrix, the determination process of the spatial picture sequence and the generation process of the encoded file, which further improves the efficiency and accuracy of the encoding process for the spatial picture.
With continued reference to fig. 8, there is shown a schematic flow chart 800 of one embodiment of a decoding method applicable to spatial pictures according to the present application, including the steps of:
step 801, determining a target longitude and latitude corresponding to a target space picture expected by a user according to the acquired operation information.
In this embodiment, an execution body (for example, a terminal device or a server in fig. 1) of the decoding method applicable to the spatial picture may determine, according to the obtained operation information, a target longitude and latitude corresponding to the target spatial picture desired by the user.
The operation information may be an action instruction corresponding to a sliding operation of the user, or a voice instruction corresponding to voice information. With continued reference to fig. 9, which shows an operation track of the user in the spatial picture matrix, the execution body aims to decode the spatial picture data at the positions along the operation track, so as to obtain and display the target spatial pictures desired by the user.
As an example, the executing body may pre-establish a correspondence between an operation position of the user on the screen and a target longitude and latitude corresponding to a target space picture expected by the user, so as to determine, in real time, the target longitude and latitude corresponding to the target space picture expected by the user in a process of executing the operation action by the user. The longitude and latitude of the target corresponding to the target space picture is the longitude and latitude corresponding to the sampling point corresponding to the target space picture.
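The patent does not fix how screen operations map to longitude and latitude; the sketch below is one illustrative assumption, with arbitrary scale factors, that converts a drag offset in pixels into a target (longitude, latitude) snapped to the 10° sampling grid of the earlier example:

```python
DEG_PER_PIXEL = 0.25   # illustrative scale factor, not specified by the patent
STEP = 10              # sampling interval of the example grid

def drag_to_target(cur_lon: float, cur_lat: float, dx_px: float, dy_px: float):
    # Horizontal drags change longitude (wrapping at 360 degrees),
    # vertical drags change latitude (clamped to the sampled 0..80 range).
    lon = (cur_lon + dx_px * DEG_PER_PIXEL) % 360
    lat = min(max(cur_lat + dy_px * DEG_PER_PIXEL, 0.0), 80.0)
    target_lon = round(lon / STEP) * STEP % 360   # snap to the sampling grid
    target_lat = min(round(lat / STEP) * STEP, 80)
    return target_lon, target_lat
```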
Step 802, determining a target space picture and position information of a key frame in a target picture group to which the target space picture belongs in an encoded file according to the target longitude and latitude.
In this embodiment, the executing body may determine, according to the target longitude and latitude, the target spatial picture and the location information of the key frame in the target picture group to which the target spatial picture belongs in the encoded file. The encoded file is the encoded file obtained in the above embodiments 200 and 700.
The position information includes position information of the target spatial picture in the encoding file and position information of a key frame in the target picture group to which the target spatial picture belongs in the encoding file.
As an example, the execution subject may previously establish a correspondence between longitude and latitude corresponding to each spatial picture referred to in the encoded file and position information of the spatial picture in the encoded file. Therefore, the position information of the target space picture in the coding file is determined according to the longitude and latitude of the target; and determining the longitude and latitude of the key frame in the target picture group to which the target space picture belongs according to the target longitude and latitude, and further determining the position information of the key frame in the encoding file.
In some optional implementations of this embodiment, the executing body may execute the step 802 as follows:
first, determining a target picture group identifier of a target picture group to which a target space picture belongs and a target picture identifier of the target space picture according to the target longitude and latitude and the key frame longitude and latitude of key frames in each picture group in the coding file.
As an example, the executing body may determine a key frame longitude and latitude of a key frame in each picture group in the encoded file, and generate a key frame longitude and latitude set; further, comparing the target longitude and latitude with the longitude and latitude of the key frames in the longitude and latitude set of the key frames, and determining a target picture group to which the target space picture belongs according to a comparison result between the target longitude and latitude and the longitude and latitude of each key frame in the longitude and latitude set of the key frames; further, a target picture group identification of the target picture group and a target picture identification of the target spatial picture are determined.
With continued reference to FIG. 4, the corresponding key frame longitude and latitude set is {(10, 10), (40, 10), (70, 10), ..., (340, 10), (10, 40), (40, 40), ..., (340, 40), (10, 70), (40, 70), ..., (340, 70)}, each pair given as (longitude, latitude) and corresponding to the center of a 3×3 sub-matrix.
And secondly, determining position information according to the target picture group identifier and the target picture identifier.
In the process of obtaining an encoded file through encoding, a spatial picture identifier of a spatial picture and a picture group identifier of a picture group to which the spatial picture belongs are generally encoded. After the target picture group identifier and the target picture identifier are determined, the position information of the target space picture in the encoded file and the position information of the key frame in the target picture group to which the target space picture belongs in the encoded file can be determined in the encoded file.
In the implementation manner, a specific implementation manner for determining the position information of the target space picture in the encoded file and the position information of the key frame in the target picture group to which the target space picture belongs in the encoded file is provided, so that the determination efficiency and accuracy of the position information determination process are improved.
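The patent leaves the container format of the encoded file open; one simple realization of the identifier-to-position mapping, assumed here purely for illustration, is an index of byte offsets recorded while the encoded file is written:

```python
from typing import Dict, Tuple

# (picture group identifier, picture identifier) -> byte offset in the encoded file.
PositionIndex = Dict[Tuple[int, int], int]

def positions(index: PositionIndex, gop_id: int, picture_id: int):
    """Return the byte offsets of the group's key frame (picture id 0) and of the
    target spatial picture inside the encoded file."""
    return index[(gop_id, 0)], index[(gop_id, picture_id)]
```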
In some optional implementations of this embodiment, the executing body may execute the first step by: firstly, taking a picture group which belongs to a key frame and corresponds to the key frame with the longitude and latitude closest to the target longitude and latitude as a target picture group in the key frame longitudes and latitudes of key frames in each picture group in an encoding file, and determining a target picture group identifier; and then, determining the target picture identification according to the offset between the target longitude and latitude and the longitude and latitude of the key frame corresponding to the key frame in the target picture group.
As an example, suppose the target longitude and latitude is (60, 20). Among the key frame longitudes and latitudes in the key frame set, (70, 10) is the closest to the target longitude and latitude. The picture group GOP3, to which the key frame at (70, 10) belongs, is taken as the target picture group, and the target picture group identifier is determined to be 3.
Then, the offset between the target longitude and latitude and the key frame longitude and latitude corresponding to the key frame in the target picture group is determined to be (-10, 10), and the target picture identification is determined to be 8.
Continuing with the 3×3 sub-matrix shown in fig. 4 as an example, the offsets between the longitude and latitude of each spatial picture in a picture group and the key frame longitude and latitude of its key frame (given as (longitude offset, latitude offset)) are:
{(-10, -10), (0, -10), (10, -10), (-10, 0), (0, 0), (10, 0), (-10, 10), (0, 10), (10, 10)}
Reordering this offset set according to the arrangement order of the spatial pictures in the picture sequence corresponding to the picture group during encoding gives the ordered offset sequence:
OFFSETS = {(0, 0), (-10, 0), (-10, -10), (0, -10), (10, -10), (10, 0), (10, 10), (0, 10), (-10, 10)}
According to this offset sequence, the target picture identifier corresponding to the target spatial picture is determined.
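The lookup just described can be sketched as follows; the key frame positions come from the 3×3 / 10° example of fig. 4, the picture identifier is the index within the OFFSETS sequence, and the use of squared Euclidean distance as the "closest" criterion is an assumption:

```python
# Key frame (longitude, latitude) per picture group, following the fig. 4 example:
# 12 groups per latitude row, group centers every 30 degrees starting at (10, 10).
KEYFRAMES = {gop + 1: (10 + 30 * (gop % 12), 10 + 30 * (gop // 12)) for gop in range(36)}
OFFSETS = [(0, 0), (-10, 0), (-10, -10), (0, -10), (10, -10),
           (10, 0), (10, 10), (0, 10), (-10, 10)]

def locate(target_lon: int, target_lat: int):
    gop_id, (k_lon, k_lat) = min(
        KEYFRAMES.items(),
        key=lambda kv: (kv[1][0] - target_lon) ** 2 + (kv[1][1] - target_lat) ** 2)
    picture_id = OFFSETS.index((target_lon - k_lon, target_lat - k_lat))
    return gop_id, picture_id

assert locate(60, 20) == (3, 8)   # matches the worked example above
```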
In the implementation manner, the target picture group identification of the target picture group is determined according to the comparison result of the key frame longitude and latitude of each key frame related in the coding file and the target longitude and latitude of the target space picture expected by the user, so that the target picture identification is determined, and the universality and the accuracy of the identification information determination process are improved.
And 803, decoding the data in the position represented by the position information in the encoded file to obtain the target space picture.
In this embodiment, the execution body decodes the data at the position represented by the position information in the encoded file to obtain the target spatial picture.
After the position information of the target data to be decoded is determined, the encoded data at the corresponding position in the encoded file can be decoded, and the target spatial picture is obtained and displayed.
In some optional implementations of this embodiment, the executing body may execute the step 803 as follows:
first, according to the target picture identification, it is determined whether the target spatial picture is a key frame in the target picture group or a predicted frame in the target picture group.
As an example, when it is determined that the target picture identification is the same as the key frame identification of the key frames in the target picture group, it is determined that the target spatial picture is a key frame in the target picture group; when the target picture identification is determined to be the same as the predicted frame identification of the predicted frames in the target picture group, the target spatial picture is determined to be the predicted frame in the target picture group.
And secondly, in response to determining that the target space picture is a predicted frame in the target picture group, decoding a key frame in the target picture group at the position represented by the position information and a predicted frame corresponding to the target picture identifier to obtain the target space picture.
When the target spatial picture is a predicted frame in the target picture group, since the predicted frame refers to a key frame in the picture group, the key frame in the target picture group at the position represented by the position information and the predicted frame corresponding to the target picture identifier need to be decoded at the same time to obtain the target spatial picture.
In some optional implementations of this embodiment, the executing body may further execute the step 803 as follows: and in response to determining that the target space picture is a key frame in the target picture group, decoding the key frame in the target picture group at the position characterized by the position information to obtain the target space picture.
When the target space picture is the key frame in the target picture group, the key frame in the target picture group at the position represented by the position information can be directly decoded to obtain the target space picture, and other space pictures do not need to be referred to.
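Continuing the toy encoder sketch from the encoding embodiment (again a conceptual illustration, not the patent's actual codec), the two decoding branches look like this:

```python
def decode_picture(encoded_group, picture_id: int):
    """Companion to the encode_group sketch: a key frame is decoded directly,
    while a predicted frame is rebuilt from the key frame plus its residual."""
    _, key = encoded_group[0]            # picture id 0 is always the key frame
    if picture_id == 0:
        return key
    _, residual = encoded_group[picture_id]
    return [k + r for k, r in zip(key, residual)]
```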
With continued reference to fig. 10, a schematic diagram 1000 of a decoding trace is shown. In this embodiment, the target longitude and latitude corresponding to the target spatial picture desired by the user are determined according to the acquired operation information; the position information, in the encoded file, of the target spatial picture and of the key frame in the target picture group to which it belongs is determined according to the target longitude and latitude; and the data at the position represented by the position information in the encoded file is decoded to obtain the target spatial picture. A decoding method applicable to spatial pictures is thereby provided.
With continued reference to fig. 11, there is shown a schematic flow 1100 of one embodiment of a decoding method for spatial pictures according to the present application, comprising the steps of:
step 1101, determining a target longitude and latitude corresponding to the target space picture expected by the user according to the obtained operation information.
In step 1102, among the key frame longitudes and latitudes of the key frames in each picture group in the encoded file, the picture group to which the key frame whose longitude and latitude are closest to the target longitude and latitude belongs is taken as the target picture group, and the target picture group identifier is determined.
In step 1103, the target picture identifier is determined according to the offset between the target longitude and latitude and the longitude and latitude of the key frame corresponding to the key frame in the target picture group.
Step 1104, determining, according to the target picture group identifier and the target picture identifier, the position information, in the encoded file, of the target spatial picture and of the key frame in the target picture group to which the target spatial picture belongs.
Step 1105, determining whether the target spatial picture is a key frame in the target picture group or a predicted frame in the target picture group according to the target picture identifier.
In step 1106, in response to determining that the target spatial picture is a predicted frame in the target picture group, the key frame in the target picture group at the position characterized by the position information and the predicted frame corresponding to the target picture identifier are decoded, so as to obtain the target spatial picture.
In step 1107, in response to determining that the target spatial picture is a key frame in the target picture group, the key frame in the target picture group at the position characterized by the position information is decoded to obtain the target spatial picture.
As can be seen from this embodiment, compared with the embodiment corresponding to fig. 8, the process 1100 of the decoding method applicable to a spatial picture in this embodiment specifically illustrates the determining process of the location information and the decoding process of the target spatial picture, which further improves the efficiency and flexibility of the decoding process for the spatial picture.
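Chaining the two sketches above gives a rough end-to-end illustration of flow 1100, from an operation-derived longitude and latitude to a decoded spatial picture. The group_positions index mapping picture group identifiers to per-picture byte ranges is an assumed by-product of the encoding stage, and the helper functions are the illustrative sketches introduced earlier, not part of the application.

from typing import Dict, Tuple

def fetch_spatial_picture(
    target_lon: float,
    target_lat: float,
    encoded_file: bytes,
    key_frames: Dict[int, Tuple[float, float]],              # group id -> key frame (lon, lat)
    group_positions: Dict[int, Dict[int, Tuple[int, int]]],  # group id -> picture id -> byte range
    decode_key,
    decode_predicted,
):
    # Steps 1101-1103: target lon/lat -> target picture group id and target picture id.
    group_id, picture_id = find_target_identifiers(target_lon, target_lat, key_frames)
    # Step 1104: identifiers -> position information of the group's frames in the file.
    positions = group_positions[group_id]
    # Steps 1105-1107: decode the key frame, plus the predicted frame if one is requested.
    return decode_target_picture(
        encoded_file, positions, picture_id, decode_key, decode_predicted
    )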
With continued reference to fig. 12, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an encoding apparatus suitable for spatial pictures, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 12, the encoding apparatus applied to a spatial picture includes: the first generating unit 1201 is configured to generate a spatial picture matrix including a plurality of spatial pictures acquired by a plurality of sampling points according to the longitude and latitude corresponding to the plurality of sampling points under different spatial angles; a dividing unit 1202 configured to divide the spatial picture matrix by a sub-matrix of a preset size to obtain a plurality of picture groups; the second generating unit 1203 is configured to encode the spatial picture in the spatial picture matrix according to the key frame and the predicted frame included in each of the plurality of picture groups, and generate an encoded file.
In some optional implementations of this embodiment, the first generating unit 1201 is further configured to: and arranging a plurality of space pictures by taking the longitude of the sampling point corresponding to the space picture as the horizontal axis and taking the latitude of the sampling point corresponding to the space picture as the vertical axis to generate a space picture matrix.
In some optional implementations of this embodiment, the second generating unit 1203 is further configured to: arranging the picture groups corresponding to sampling points with the same latitude in the plurality of picture groups according to the order of the longitude of the sampling points corresponding to the picture groups from small to large, and generating a plurality of picture group subsequences; arranging a plurality of picture group subsequences according to the order of the latitudes of sampling points corresponding to the picture groups from small to large, determining picture group identifiers corresponding to the picture groups, and generating a picture group sequence; according to the sequence of the key frames before the predicted frames, arranging the space pictures in each picture group in the picture group sequence, determining the picture identification of the space picture in each picture group in the plurality of picture groups, and generating a space picture sequence; and encoding the spatial picture sequence according to the key frames and the predicted frames included in each of the plurality of picture groups to generate an encoded file.
In some optional implementations of this embodiment, the second generating unit 1203 is further configured to: taking the space picture at the central position of the sub-matrix corresponding to each picture group as a key frame and the space pictures adjacent to the key frame as predicted frames, arranging the space pictures in each picture group in the picture group sequence in the order of the key frame first and the predicted frames second, and determining the picture identification of the space pictures in each picture group of the plurality of picture groups to generate the space picture sequence.
In some optional implementations of this embodiment, the second generating unit 1203 is further configured to: and for each of the plurality of picture groups, encoding the spatial picture sequence by adopting a reference mode that the predicted frame in the picture group only references the key frame in the picture group, and generating an encoding file.
In this embodiment, the first generating unit in the encoding device suitable for spatial pictures generates a spatial picture matrix including a plurality of spatial pictures acquired by a plurality of sampling points, according to the longitudes and latitudes corresponding to the sampling points under different spatial angles; the dividing unit divides the spatial picture matrix by a sub-matrix of a preset size to obtain a plurality of picture groups; and the second generating unit encodes the spatial pictures in the spatial picture matrix according to the key frames and the predicted frames included in each of the picture groups to generate an encoded file. An encoding device suitable for spatial pictures is thereby provided. In the encoding process, the spatial pictures are reorganized into a spatial picture matrix, and the picture groups within that matrix are designated to manage the spatial pictures they contain, which decouples the spatial pictures in time and improves the efficiency and accuracy of the encoding process.
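As a rough illustration of the grouping performed by the first generating unit and the dividing unit, the Python sketch below arranges the spatial pictures into a matrix and splits it into 3×3 picture groups with the key frame at the centre. It assumes a complete sampling grid whose numbers of distinct longitudes and latitudes are multiples of the group size, that the predicted frames follow the same neighbour order as the offset example in the decoding description, and that each picture is a plain byte string; the names build_groups and RING are illustrative, and the actual intra and inter coding of the frames is not shown.

from typing import Dict, List, Tuple

# Neighbour order in grid steps (d_col, d_row); with a 10-degree sampling grid this
# mirrors the offset sequence used in the decoding example (an illustrative assumption).
RING: List[Tuple[int, int]] = [
    (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1),
]

def build_groups(
    pictures: Dict[Tuple[float, float], bytes],  # (lon, lat) of a sampling point -> raw picture
    group_size: int = 3,                         # each picture group is a 3x3 sub-matrix
) -> List[dict]:
    # Spatial picture matrix: longitude on the horizontal axis, latitude on the vertical axis.
    lons = sorted({lon for lon, _ in pictures})
    lats = sorted({lat for _, lat in pictures})
    groups: List[dict] = []
    group_id = 0
    # Picture group sub-sequences ordered by latitude; groups within each ordered by longitude.
    for row0 in range(0, len(lats), group_size):
        for col0 in range(0, len(lons), group_size):
            centre_col = col0 + group_size // 2
            centre_row = row0 + group_size // 2
            centre = (lons[centre_col], lats[centre_row])
            ordered = [centre]                    # picture id 0: key frame at the sub-matrix centre
            for d_col, d_row in RING:             # remaining ids: adjacent predicted frames
                col, row = centre_col + d_col, centre_row + d_row
                if col0 <= col < col0 + group_size and row0 <= row < row0 + group_size:
                    ordered.append((lons[col], lats[row]))
            groups.append({
                "group_id": group_id,
                "key_frame": centre,
                "pictures": {pic_id: pictures[cell] for pic_id, cell in enumerate(ordered)},
            })
            group_id += 1
    return groups

Placing the key frame at the centre of each sub-matrix keeps every predicted frame spatially adjacent to its reference, which is what allows a predicted frame to reference only the key frame of its own picture group.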
With continued reference to fig. 13, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a decoding apparatus suitable for spatial pictures, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 8, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 13, the decoding apparatus applied to a spatial picture includes: a first determining unit 1301 configured to determine, according to the acquired operation information, a target longitude and latitude corresponding to a target space picture desired by a user; a second determining unit 1302 configured to determine, according to the target longitude and latitude, the position information, in the encoded file, of the target space picture and of the key frame in the target picture group to which the target space picture belongs; and a decoding unit 1303 configured to decode the data at the position represented by the position information in the encoded file to obtain the target space picture.
In some optional implementations of this embodiment, the second determining unit 1302 is further configured to: determining a target picture group identifier of a target picture group to which the target space picture belongs and a target picture identifier of the target space picture according to the target longitude and latitude and the key frame longitude and latitude of key frames in each picture group in the coding file; and determining the position information according to the target picture group identifier and the target picture identifier.
In some optional implementations of this embodiment, the second determining unit 1302 is further configured to: the picture group which belongs to the key frame and corresponds to the key frame with the longitude and latitude closest to the target longitude and latitude in the key frame longitude and latitude of the key frames in each picture group in the coding file is taken as the target picture group, and the target picture group identification is determined; and determining the target picture identification according to the offset between the target longitude and latitude and the key frame longitude and latitude corresponding to the key frame in the target picture group.
In some optional implementations of this embodiment, the decoding unit 1303 is further configured to: determining whether the target space picture is a key frame in the target picture group or a predicted frame in the target picture group according to the target picture identification; and in response to determining that the target space picture is a predicted frame in the target picture group, decoding a key frame in the target picture group at the position represented by the position information and a predicted frame corresponding to the target picture identifier to obtain the target space picture.
In some optional implementations of this embodiment, the decoding unit 1303 is further configured to: and in response to determining that the target space picture is a key frame in the target picture group, decoding the key frame in the target picture group at the position characterized by the position information to obtain the target space picture.
In this embodiment, the first determining unit determines, according to the acquired operation information, the target longitude and latitude corresponding to the target space picture desired by the user; the second determining unit determines, according to the target longitude and latitude, the position information, in the encoded file, of the target space picture and of the key frame in the target picture group to which it belongs; and the decoding unit decodes the data at the position represented by the position information in the encoded file to obtain the target space picture. A decoding device suitable for spatial pictures is thereby provided. In the decoding process, the target longitude and latitude corresponding to the target space picture desired by the user are mapped to the corresponding position in the encoded file, so that the data at any position can be decoded, which improves the efficiency and flexibility of the decoding process for spatial pictures.
Referring now to FIG. 14, there is illustrated a schematic diagram of a computer system 1400 suitable for use with devices (e.g., devices 101, 102, 103, 105 shown in FIG. 1) implementing embodiments of the present application. The apparatus shown in fig. 14 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present application.
As shown in fig. 14, the computer system 1400 includes a processor (e.g., CPU, central processing unit) 1401, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1402 or a program loaded from a storage section 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the system 1400 are also stored. The processor 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
The following components are connected to the I/O interface 1405: an input section 1406 including a keyboard, a mouse, and the like; an output section 1407 including a cathode ray tube (CRT) display, a liquid crystal display (LCD), a speaker, and the like; a storage section 1408 including a hard disk and the like; and a communication section 1409 including a network interface card such as a LAN card or a modem. The communication section 1409 performs communication processing via a network such as the Internet. A drive 1410 is also connected to the I/O interface 1405 as needed. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as needed, so that a computer program read therefrom is installed into the storage section 1408 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1409 and/or installed from the removable medium 1411. The above-described functions defined in the method of the present application are performed when the computer program is executed by the processor 1401.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the client computer, partly on the client computer, as a stand-alone software package, partly on the client computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the client computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor including a first generating unit, a dividing unit, and a second generating unit; or, as another example, a processor including a first determining unit, a second determining unit, and a decoding unit. The names of these units do not in some cases limit the units themselves; for example, the second determining unit may also be described as "a unit that determines, according to the target longitude and latitude, the position information, in the encoded file, of the target spatial picture and of the key frame in the target picture group to which the target spatial picture belongs".
As another aspect, the present application also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the computer device to: generating a space picture matrix comprising a plurality of space pictures acquired by a plurality of sampling points according to the longitudes and latitudes corresponding to the sampling points under different space angles; dividing a space picture matrix by using a sub-matrix with a preset size to obtain a plurality of picture groups; and encoding the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups to generate an encoded file.
The one or more programs, when executed by the apparatus, further cause the computer device to: according to the obtained operation information, determining the target longitude and latitude corresponding to the target space picture expected by the user; determining a target space picture and position information of a key frame in a target picture group to which the target space picture belongs in an encoding file according to the longitude and latitude of the target; and decoding the data in the position represented by the position information in the encoded file to obtain the target space picture.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to technical solutions formed by the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with technical features disclosed in the present application (but not limited thereto) that have similar functions.

Claims (14)

1. A coding method applicable to spatial pictures, comprising:
generating a space picture matrix comprising a plurality of space pictures acquired by a plurality of sampling points according to the longitudes and latitudes corresponding to the sampling points under different space angles;
dividing the space picture matrix by a sub-matrix with a preset size to obtain a plurality of picture groups;
and encoding the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups to generate an encoding file.
2. The method of claim 1, wherein the generating a spatial picture matrix including a plurality of spatial pictures acquired by a plurality of sampling points according to longitudes and latitudes corresponding to the plurality of sampling points under different spatial angles includes:
and arranging the plurality of space pictures by taking the longitude of the sampling point corresponding to the space picture as the horizontal axis and taking the latitude of the sampling point corresponding to the space picture as the vertical axis to generate the space picture matrix.
3. The method of claim 1, wherein the encoding the spatial picture in the spatial picture matrix according to the key frame and the predicted frame included in each of the plurality of picture groups, to generate the encoded file, comprises:
arranging the picture groups corresponding to sampling points with the same latitude in the picture groups according to the order of the longitude of the sampling points corresponding to the picture groups from small to large, and generating a plurality of picture group subsequences;
arranging the plurality of picture group subsequences according to the order of the latitudes of sampling points corresponding to the picture groups from small to large, determining picture group identifiers corresponding to the plurality of picture groups, and generating a picture group sequence;
according to the sequence of the key frames before the predicted frames, arranging the space pictures in each picture group in the picture group sequence, determining the picture identification of the space picture in each picture group in the plurality of picture groups, and generating a space picture sequence;
and encoding the spatial picture sequence according to the key frames and the predicted frames included in each of the plurality of picture groups, and generating the encoded file.
4. A method according to claim 3, wherein the arranging the spatial pictures in each of the sequence of picture groups in order of key frames followed by predicted frames, determining a picture identification of the spatial pictures in each of the plurality of picture groups, and generating the sequence of spatial pictures comprises:
and arranging the space pictures in each picture group in the picture group sequence in the order of the key frame first and the predicted frames second, taking the space picture at the central position of the sub-matrix corresponding to the picture group as a key frame and taking the space pictures adjacent to the key frame as predicted frames, and determining the picture identification of the space picture in each picture group in the plurality of picture groups to generate the space picture sequence.
5. The method according to claim 3 or 4, wherein the encoding the sequence of spatial pictures according to key frames and predicted frames included in each of the plurality of picture groups, generating the encoded file, comprises:
and for each picture group in the plurality of picture groups, encoding the spatial picture sequence by adopting a reference mode that a predicted frame in the picture group uniquely references a key frame in the picture group, and generating the encoding file.
6. A decoding method applicable to spatial pictures, comprising:
according to the obtained operation information, determining the target longitude and latitude corresponding to the target space picture expected by the user;
determining, according to the target longitude and latitude, the position information, in the encoded file, of the target space picture and of the key frame in the target picture group to which the target space picture belongs;
and decoding the data in the position represented by the position information in the encoded file to obtain the target space picture.
7. The method of claim 6, wherein the determining, according to the target longitude and latitude, the position information, in the encoded file, of the target spatial picture and of the key frame in the target picture group to which the target spatial picture belongs includes:
determining a target picture group identifier of a target picture group to which the target space picture belongs and a target picture identifier of the target space picture according to the target longitude and latitude and the key frame longitude and latitude of a key frame in each picture group in the coding file;
and determining the position information according to the target picture group identifier and the target picture identifier.
8. The method of claim 7, wherein the determining the target picture group identifier of the target picture group to which the target spatial picture belongs and the target picture identifier of the target spatial picture according to the target longitude and latitude and the key frame longitude and latitude of the key frame in each picture group in the encoded file comprises:
among the key frame longitudes and latitudes of the key frames in each picture group in the encoded file, taking the picture group to which the key frame whose longitude and latitude are closest to the target longitude and latitude belongs as the target picture group, and determining the target picture group identifier;
and determining the target picture identification according to the offset between the target longitude and latitude and the longitude and latitude of the key frame corresponding to the key frame in the target picture group.
9. The method according to claim 7 or 8, wherein said decoding the data in the encoded file at the location characterized by the location information to obtain the target spatial picture comprises:
determining whether the target space picture is a key frame in the target picture group or a predicted frame in the target picture group according to the target picture identification;
and in response to determining that the target spatial picture is a predicted frame in the target picture group, decoding a key frame in the target picture group at a position characterized by the position information and a predicted frame corresponding to the target picture identifier to obtain the target spatial picture.
10. The method of claim 9, wherein the decoding the target spatial picture according to the location information further comprises:
and in response to determining that the target spatial picture is a key frame in the target picture group, decoding the key frame in the target picture group at the position characterized by the position information to obtain the target spatial picture.
11. An encoding device suitable for spatial pictures, comprising:
the first generation unit is configured to generate a space picture matrix comprising a plurality of space pictures acquired by a plurality of sampling points according to the longitude and latitude corresponding to the sampling points under different space angles;
the dividing unit is configured to divide the space picture matrix by a submatrix with a preset size to obtain a plurality of picture groups;
and the second generation unit is configured to encode the space pictures in the space picture matrix according to the key frames and the predicted frames respectively included in the plurality of picture groups to generate an encoded file.
12. A decoding device suitable for spatial pictures, comprising:
the first determining unit is configured to determine the target longitude and latitude corresponding to the target space picture expected by the user according to the acquired operation information;
a second determining unit configured to determine, according to the target longitude and latitude, position information, in an encoded file, of the target spatial picture and of the key frame in the target picture group to which the target spatial picture belongs;
and the decoding unit is configured to decode the data in the position represented by the position information in the encoded file to obtain the target space picture.
13. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-10.
14. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-10.
CN202310484498.2A 2023-04-28 2023-04-28 Encoding method, decoding method and device applicable to spatial pictures Pending CN116708806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310484498.2A CN116708806A (en) 2023-04-28 2023-04-28 Encoding method, decoding method and device applicable to spatial pictures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310484498.2A CN116708806A (en) 2023-04-28 2023-04-28 Encoding method, decoding method and device applicable to spatial pictures

Publications (1)

Publication Number Publication Date
CN116708806A true CN116708806A (en) 2023-09-05

Family

ID=87824723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310484498.2A Pending CN116708806A (en) 2023-04-28 2023-04-28 Encoding method, decoding method and device applicable to spatial pictures

Country Status (1)

Country Link
CN (1) CN116708806A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination