CN110248116B - Picture processing method and device, computer equipment and storage medium

Info

Publication number: CN110248116B
Application number: CN201910495571.XA
Authority: CN (China)
Prior art keywords: file, picture, video, live, video data
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110248116A
Inventor: 邹全 (Zou Quan)
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.; priority to CN201910495571.XA
Publication of application: CN110248116A
Publication of grant: CN110248116B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Abstract

The application relates to a picture processing method and apparatus, a computer device, and a storage medium. The method includes: acquiring the information of a live picture, the information comprising an original video file and a still picture file; embedding the video data of the original video file into the still picture file to synthesize a picture file that includes the video data; and sending the synthesized picture file. When the picture file is displayed, a video file generated from the video data in the picture file is played, and the generated video file is consistent with the original video file. This scheme can reduce the cost of transmitting live pictures.

Description

Picture processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a picture processing method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of science and technology, a great deal of advanced technology keeps emerging and profoundly influences people's life and work. The live picture has appeared before the public as a new form of presentation. A live picture (Live Photo) is a file obtained by retaining the video recorded shortly before and after the shot, in addition to the ordinary photo itself. A live picture therefore generally includes a video file and a picture file.
In the conventional method, sending a live picture requires sending its video file and its picture file separately, and the receiver then displays the live picture from the two received files. Both the sender and the receiver must therefore have the capability of transmitting video files as well as picture files, which places high requirements on the equipment and results in high cost.
Disclosure of Invention
In view of the above, it is necessary to provide a picture processing method, an apparatus, a computer device, and a storage medium that solve the problem of the relatively high cost of the conventional method.
A method of picture processing, the method comprising:
acquiring information of a live picture; the information comprises an original video file and a static picture file;
embedding the video data in the original video file into the static picture file, and synthesizing a picture file comprising the video data;
sending the synthesized picture file;
when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file.
In one embodiment, the sending the synthesized picture file comprises:
uploading the synthesized picture file;
the picture file is used for instructing a live display end, after the live display end downloads the picture file, to read the video data from the picture file, generate a video file from the read video data, and play the generated video file when the picture file is displayed.
In one embodiment, the method further comprises:
displaying the visual identification of the uploaded picture file in an uploaded file set; the uploaded file set is a set of visual identifications of uploaded files;
and displaying a live picture mark corresponding to the visual identification of the picture file.
In one embodiment, the method further comprises:
when a viewing instruction for the uploaded picture file is received, downloading the synthesized picture file;
reading video data from the picture file, and generating a video file according to the video data;
and playing the generated video file when the picture file is displayed.
In one embodiment, the generating a video file from the video data includes:
determining the corresponding position of each read video data in the original video file;
sequencing the video data according to the sequence of the corresponding positions of the video data from front to back;
and sequentially splicing the video data according to the ascending order of the sequence to obtain a video file.
In one embodiment, the playing the generated video file while displaying the picture file includes:
decoding the picture file by an image decoder in an integrated presentation library for live pictures;
decoding, by a video decoder in the presentation library, the generated video file;
screening view components adapted to a local operating system from view components provided in the display library;
and when the picture file is displayed through the screened view component, playing the generated video file.
In one embodiment, the reading video data from the picture file and generating a video file according to the video data includes:
separating video data from the downloaded picture file through a separation tool in the display library;
generating a video file according to the separated video data;
when the picture file is displayed through the screened view component, playing the generated video file comprises:
and when the picture file with the video data separated out is displayed through the screened view component, playing the generated video file.
In one embodiment, the uploaded picture file is further used for instructing a non-live display end, after the non-live display end downloads the picture file, to read the picture data included in the picture file and display a still picture corresponding to the picture data.
In one embodiment, the embedding the video data in the original video file into the still picture file, and the synthesizing a picture file including video data includes:
determining a custom data storage area in the static picture file;
storing the video data of the original video file in the video storage object of the custom data storage area to obtain a picture file comprising the video data;
wherein the video storage object comprises a live picture identifier; the live picture identifier is used for indicating that the video data stored by the video storage object belong to the video data included in the live picture.
In one embodiment, the video storage object comprises a video tag segment; in the video storage object of the custom data storage area, storing the video data of the original video file comprises:
when the size of the original video file exceeds the data storage capacity of a single video tag segment, segmenting the original video file according to the data storage capacity, where the size of each piece of video data obtained by segmentation is smaller than or equal to the data storage capacity;
and adding each piece of video data obtained by segmentation to the data storage area of a respective video tag segment.
In one embodiment, the video storage object comprises a block of data; in the video storage object of the custom data storage area, storing the video data of the original video file comprises:
creating a video storage data block in the custom data storage area;
and storing the video data of the original video file in the created data storage area of the data block.
A picture processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring information of a live picture; the information comprises an original video file and a static picture file;
the synthesis module is used for embedding the video data in the original video file into the static picture file and synthesizing the picture file comprising the video data;
a sending module, configured to send the synthesized picture file; when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file.
A computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring information of a live picture; the information comprises an original video file and a static picture file;
embedding the video data in the original video file into the static picture file, and synthesizing a picture file comprising the video data;
sending the synthesized picture file;
when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring information of a live picture; the information comprises an original video file and a static picture file;
embedding the video data in the original video file into the static picture file, and synthesizing a picture file comprising the video data;
sending the synthesized picture file;
when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file.
According to the picture processing method, apparatus, computer device and storage medium, the video data in the original video file included in the live picture is embedded into the still picture file, and a picture file including the video data is synthesized. The synthesized picture file is then sent. Because the synthesized picture file is a single file, only a picture in the form of a single file needs to be sent, and no video transmission capability is required. When the picture file is displayed, the video file generated from the video data in the picture file can be played, and the generated video file is consistent with the original video file. That is, while the live picture is still transmitted successfully, the transmission requirements are reduced, thereby reducing the cost.
A method of picture processing, the method comprising:
receiving a viewing instruction for the picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture;
reading video data from the picture file;
generating a video file according to the video data; the generated video file conforms to the original video file;
and playing the generated video file when the picture file is displayed.
A picture processing apparatus, the apparatus comprising:
the receiving module is used for receiving a viewing instruction aiming at the picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture;
the video generation module is used for reading video data from the picture file; generating a video file according to the video data; the generated video file conforms to the original video file;
and the display module is used for playing the generated video file when the picture file is displayed.
A computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
receiving a viewing instruction for the picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture;
reading video data from the picture file;
generating a video file according to the video data; the generated video file conforms to the original video file;
and playing the generated video file when the picture file is displayed.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of:
receiving a viewing instruction for the picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture;
reading video data from the picture file;
generating a video file according to the video data; the generated video file conforms to the original video file;
and playing the generated video file when the picture file is displayed.
According to the picture processing method, apparatus, computer device and storage medium, when a picture file synthesized from the original video file and the still picture file of a live picture is viewed, video data is read from the picture file and a video file is generated from the video data; the generated video file is consistent with the original video file. That is, the original video file and the still picture file of the live picture can be restored from the single picture file, so that the generated video file is played when the picture file is displayed and the live picture can be viewed. Since only a single picture file needs to be received to view a live picture, the requirements are reduced and cost is saved.
Drawings
FIG. 1A is a diagram illustrating an exemplary scenario for implementing a method for image processing;
FIG. 1B is a diagram illustrating an exemplary scenario of an image processing method according to another embodiment;
FIG. 2 is a flow chart illustrating a method for processing pictures according to an embodiment;
FIG. 3 is a schematic diagram of an upload interface, under an embodiment;
FIG. 4 is a diagram that illustrates viewing of a picture file at a non-live presentation side, in one embodiment;
FIG. 5 is a diagram illustrating viewing of a picture file at a non-live presentation end in accordance with another embodiment;
FIG. 6 is a diagram illustrating uploading of picture files, in one embodiment;
FIG. 7 is a diagram that illustrates viewing of a picture file at a live presentation site, in one embodiment;
FIG. 8 is a block diagram that illustrates a library of live pictures, according to an embodiment;
FIG. 9 is a diagram illustrating a format of a still picture file according to an embodiment;
FIG. 10 is a diagram illustrating a format of a still picture file according to another embodiment;
FIG. 11 is a schematic diagram of a video storage object in one embodiment;
FIG. 12 is a diagram illustrating the structure of a picture file synthesized in one embodiment;
FIG. 13 is a timing diagram of a picture processing method in one embodiment;
FIG. 14 is a flowchart illustrating a method for processing pictures according to an embodiment;
FIG. 15 is a block diagram of a picture processing device in one embodiment;
FIG. 16 is a block diagram of a picture processing apparatus in another embodiment;
FIG. 17 is a block diagram of a picture processing apparatus in a further embodiment;
FIG. 18 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1A is an application scenario diagram of a picture processing method in an embodiment. Referring to fig. 1A, the application scenario includes a first terminal 110, a server 120, and a second terminal 130, where the server 120 establishes connections with the first terminal 110 and the second terminal 130 through a network. The second terminal 130 is a device having a live picture display function. The first terminal 110 and the second terminal 130 may each be a smart television, a smart speaker, a desktop computer, or a mobile terminal, and the mobile terminal may include at least one of a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, a wearable device, and the like. The functions of the server 120 may be performed by one server or a cluster of multiple servers.
A user can select a live picture to be uploaded through the first terminal 110, and the first terminal 110 can acquire the information of the live picture; the information includes an original video file and a still picture file. The user may upload the live picture via the first terminal 110, and the first terminal 110 may embed the video data of the original video file included in the information of the live picture into the still picture file, so that the original video file and the still picture file are synthesized into a picture file in the form of a single file; that is, the synthesized picture file includes the video data. The first terminal 110 may transmit the synthesized picture file to the server 120. The second terminal 130 may download the synthesized picture file from the server 120 and, while displaying the picture file, play a video file generated from the video data in the picture file, so as to display it in the form of a live picture; the generated video file is consistent with the original video file.
Fig. 1B is a diagram illustrating an application scenario of a picture processing method according to another embodiment. Referring to fig. 1B, the application scenario may include a first terminal 110 and a second terminal 130 directly connected to a network. The first terminal may directly transmit the picture file including the video data to the second terminal 130 after embedding the video data in the original video file into the still picture file and synthesizing the picture file. The second terminal 130 may play a video file generated from video data in the picture file when the picture file is presented, so as to present in the form of a live picture.
It should be noted that the roles of the first terminal 110 and the second terminal 130 may be interchanged; that is, the first terminal 110 may also receive or download a synthesized picture file and perform the relevant steps of displaying the picture file in the form of a live picture, and the second terminal 130 may also perform the synthesis of a picture file including video data.
Fig. 2 is a flowchart illustrating a picture processing method according to an embodiment. This embodiment mainly takes the case where the picture processing method is applied to a computer device, which may be the first terminal 110 in fig. 1A, as an example. Referring to fig. 2, the method specifically includes the following steps:
s202, acquiring information of a live picture; the information includes an original video file and a still picture file.
A live picture (Live Photo) is a file in a special format produced by a camera. That is, a live picture is a file obtained by retaining a video, including sound, for a certain period before and after the shot, in addition to taking an ordinary photograph. It will be appreciated that if a live picture is viewed on a computer device that supports live viewing, the video can be played while the picture is presented.
In one embodiment, the live picture may be taken by an iOS system camera. That is, on the basis of taking a normal photograph using the iOS system camera, a live picture can be obtained by retaining a video, including sound, for a certain period before and after the shot. The iOS system is a mobile operating system developed by Apple Inc. It should be noted that a live picture is not limited to being captured by the iOS system camera; any file that, after an ordinary picture is taken, retains a video including sound for a period before and after the capture may be referred to as a live picture.
Thus, the information of the live picture includes an original video file and a still picture file. The original video file is the most initial video file included in the information of the live picture. The still picture file is the most initial picture file included in the information of the live picture.
The original video file includes video data. The still picture file includes picture data. It is understood that the still picture file belongs to a still picture that is commonly taken. The video file is a video including sound in a period of time before and after the still picture is taken.
In one embodiment, the format of the still picture file may include at least one of the JPG format and the HEIC format. JPG (JPEG, Joint Photographic Experts Group) is an image format developed by the Joint Photographic Experts Group and standardized as ISO 10918-1; "JPG" is simply its common name. HEIC is a picture storage format based on the HEIF (High Efficiency Image File Format) standard.
In other embodiments, the still picture file may be in other formats, but is not limited thereto.
Specifically, the user may select a live picture to be sent through an interface of the computer device, and the computer device further obtains information of the selected live picture.
In one embodiment, the computer device may display an upload interface, and the user may select a live picture to be uploaded through the upload interface, and the computer device may acquire information of the selected live picture.
It is to be understood that there may be at least one live picture. That is, the user can select a plurality of live pictures at the same time, thereby uploading multiple live pictures in a batch. The user may also upload a single live picture; this is not limited here.
S204, video data in the original video file is embedded into the static picture file, and the picture file comprising the video data is synthesized.
In one embodiment, step S204 is performed when a transmission instruction for a live picture is received.
The sending instruction is an instruction for triggering the sending of a live picture. In one embodiment, the sending instruction includes an upload instruction for a live picture. It is to be understood that the sending instruction may also be another type of instruction capable of triggering the sending of the live picture, and is not limited to the upload instruction; for example, it may be an instruction to send the live picture end-to-end.
In one embodiment, the computer device may present a path selection entry in the upload interface, and the user may perform a trigger operation on the path selection entry to specify a storage path after the live picture is uploaded. Thereafter, the user may enter an upload instruction for the live picture through the computer device. And the path selection inlet is used for inputting a storage path after the live pictures are uploaded.
In one embodiment, the upload interface may be an interface exposed in a cloud services application for uploading files. The cloud service application is an application program for realizing a cloud service function. Cloud services, i.e., services provided through the cloud. It is understood that the cloud service application can be used to implement functions such as cloud computing and cloud storage. Cloud computing is to convert the traditional computing work into cloud operation based on a network. Cloud storage refers to storing data in a cloud. In one embodiment, the cloud service application may be a cloud disk application. The cloud network disk is an online storage service realized at the cloud end.
In this embodiment, a cloud service application may be installed in the computer device, and a user may open an upload interface of the cloud service application and input a storage path after uploading a live picture through a path selection entry in the upload interface. In one embodiment, an upload control may be exposed in the cloud service application, and when a trigger operation for the upload control is detected, an upload instruction for the live picture is generated.
Fig. 3 is a schematic diagram of an upload interface in an embodiment. Fig. 3 shows an uploading interface provided for the cloud service application. Referring to fig. 3, a cloud service application is illustrated as a micro cloud. In an upload interface provided by the cloud service application, a user may select a live picture 302 as a live picture to be uploaded, and a path selection entry 304 is provided in the upload interface, and the "cloudlet/wylp" is a storage path after uploading the live picture input in the path selection entry 304. The user may trigger the upload control 306 to enter upload instructions for the live picture.
Specifically, the computer device may obtain video data from an original video file and embed the video data into a still picture file, thereby synthesizing the video data and the still picture file into an integral, single-file picture file. It will be appreciated that the embedded video data is included in the resultant picture file.
It can be understood that the video data in the original video file may be embedded into the still picture file as a whole, or the original video file may be split to obtain multiple video data, and then each video data is embedded into the still picture file.
The synthesized picture file including the video data is essentially different from a compressed archive obtained by compressing two files. In a compressed archive, the two compressed files remain two separate, independent files rather than being merged into a single whole, and the archive has its own format that is entirely unrelated to the formats of the compressed files. In the embodiment of the present application, by contrast, the original video file and the still picture file are combined into a single file whose format retains the format of the original still picture file, so the combined file is still a picture file.
In one embodiment, the computer device may add video data of the original video file directly at the end of the still picture file, thereby synthesizing a picture file including the video data. In another embodiment, the computer device may also add video data of the original video file to a custom data storage area in the still picture file to synthesize a picture file including the video data.
The custom data storage area is an area which is provided in the format of the static picture file and is specially used for storing custom data of the application. It can be understood that the video data added in the custom data storage area can conform to the standard of the picture file format, so that the compatibility is good, and the video data is not easy to lose in the processing process of the picture file.
It can be understood that the picture file can be synthesized by creating a video tag segment or a video storage data block in the custom data storage area and adding the video data to it. A video tag segment is a marker segment used for storing video data; a video storage data block is a data block used for storing video data. A sketch of the tag segment approach is given below.
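As a minimal illustration of the tag segment approach, the following sketch embeds video bytes into a JPEG still picture file as a series of application marker (APPn) segments, each prefixed with a live picture identifier and a position header. The choice of APP11, the "LIVE" identifier, the index/count header, and the 64 KB segment limit are assumptions for illustration only; the patent does not fix these values.

```python
import struct

LIVE_ID = b"LIVE"      # assumed live picture identifier
APP11 = b"\xff\xeb"    # assumed APPn marker reserved for video data
# A JPEG segment length field is 2 bytes and counts itself, so the
# usable payload per segment is 65535 - 2 bytes minus our 8-byte header.
MAX_PAYLOAD = 65535 - 2 - (len(LIVE_ID) + 4)

def embed_video(jpeg_bytes: bytes, video_bytes: bytes) -> bytes:
    """Synthesize a picture file that includes video data by inserting
    the video as APP11 tag segments right after the JPEG SOI marker."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG still picture file"
    chunks = [video_bytes[i:i + MAX_PAYLOAD]
              for i in range(0, len(video_bytes), MAX_PAYLOAD)]
    segments = bytearray()
    for index, chunk in enumerate(chunks):
        # identifier + (position, total count) header + video data piece
        payload = LIVE_ID + struct.pack(">HH", index, len(chunks)) + chunk
        segments += APP11 + struct.pack(">H", len(payload) + 2) + payload
    # The format of the original still picture file is retained, so the
    # result is still a valid JPEG; decoders skip unknown APPn segments.
    return jpeg_bytes[:2] + bytes(segments) + jpeg_bytes[2:]
```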
S206, sending the synthesized picture file; when the picture file is displayed, a video file generated from video data in the picture file is played.
It will be appreciated that a picture file, when presented, is used to indicate the playing of a video file generated from the video data in the picture file. The picture file can be displayed on the opposite end (i.e. the receiver end) or on the local end (i.e. the end of the computer device sending the picture file).
It should be noted that, since the video data in the picture file is embedded by the video data in the original video file, the video file generated from the video data in the picture file conforms to the original video file.
The picture file is displayed, which may mean that the picture file itself is displayed, or that the picture file from which the video data is separated from the picture file is displayed. Therefore, the picture file can also be used to indicate that video data is separated from the picture file, and when the picture file after the video data is separated is displayed, the video file generated by the separated video data is played.
It is to be understood that the video file generated from the video data in the picture file may be generated when the picture file is displayed or may be generated before the picture file is displayed.
In one embodiment, when the picture file is displayed, the receiver is instructed to generate a video file from the video data in the picture file and to play the generated video file. In other words, displaying the picture file triggers generating a video file from the video data in the picture file, and the generated video file is played while the picture file is displayed.
In another embodiment, the picture file is used for instructing the receiving party to generate a video file according to the video data in the picture file and playing the generated video file while displaying the picture file. Equivalently, after receiving the picture file, the receiver generates a video file according to the video data in the picture file, and then directly plays the generated video file when displaying the picture file.
It should be noted that, when there are multiple pieces of video data, the receiver may splice the pieces of video data in sequence to synthesize a video file corresponding to the original video file. That is, the pieces are spliced according to their positions in the original video file, so that the generated video file is consistent with the original video file.
It is to be understood that the receiver of the synthesized picture file is not limited, and may be a receiving terminal (such as the second terminal in fig. 1B), or may be a server. Therefore, the computer device can directly send the synthesized picture file to the receiving terminal end-to-end. The computer device may also upload the synthesized picture file to a server.
In one embodiment, when the receiving party is a receiving terminal, the picture file is used for instructing the receiving terminal to play the generated video file when the receiving terminal displays the picture file, where the generated video file is generated from the video data in the picture file. Specifically, after receiving the synthesized picture file, the receiving terminal may display the picture file and play the generated video file while displaying it.
In one embodiment, the receiving terminal may include a live display end and a non-live display end. The live display end refers to a device with the live picture display function, i.e., a device that can display live pictures. The non-live display end is a device without the live picture display function, i.e., one that cannot display live pictures. When the receiving terminal is a live display end, the generated video file can be played while the picture file is displayed. When the receiving terminal is a non-live display end, a still picture is displayed from the picture data in the picture file instead of being displayed in the form of a live picture.
In one embodiment, when the receiving party is a server (i.e., uploading the synthesized picture file to the server), the live presentation terminal may download the picture file from the server and play the generated video file while presenting the picture file. Wherein the generated video file is generated from video data embedded in the picture file. In one embodiment, the non-live presentation terminal may also download the picture file from the server and present the still picture according to the picture data in the picture file.
According to the picture processing method, when a picture file synthesized from the original video file and the still picture file of a live picture is viewed, video data is read from the picture file and a video file is generated from it; the generated video file is consistent with the original video file. That is, the original video file and the still picture file of the live picture can be restored from the single picture file, so that the generated video file is played when the picture file is displayed and the live picture can be viewed. Since only a single picture file needs to be received, live pictures can be viewed with reduced requirements and lower cost.
In one embodiment, the step S206 of sending the synthesized picture file includes: and uploading the synthesized picture file. The picture file is used for instructing the live display end to read video data from the picture file after being downloaded by the live display end, generating the video file according to the read video data, and playing the generated video file when the picture file is displayed.
Specifically, the computer device may upload the picture file to a server. The live display end can download the picture file from the server, read the video data from the picture file, and generate a video file from the read video data. The live display end can display the picture file and play the generated video file while displaying it.
The term "live display end" refers generally to any device with a live picture display function, that is, any device capable of displaying live pictures; it is not restricted to devices other than the computer device. In fact, the live display end may be the computer device itself: after the user uploads the synthesized picture file through the computer device, the user may also download and view it through the same computer device.
In an embodiment, the uploaded picture file is further used for instructing the non-live display terminal to read picture data included in the picture file and display a still picture corresponding to the picture data after being downloaded by the non-live display terminal.
In one embodiment, the non-live presentation end may include at least one of a Web page (Web), a Personal Computer (PC), an application that does not support a live picture presentation function, and the like.
In one embodiment, the application that does not support the live picture presentation function may be an application using a non-iOS system. For example, applications using the Android system currently have no live picture display function, so such an application can be a non-live display end; however, if the Android system later gains this capability and the application acquires the live picture display function, the application will no longer belong to the non-live display end.
It is understood that the non-live presenter may also download the picture file from the server. Since the non-live display end does not have the live picture display function, the non-live display end can read picture data included in the picture file and display a still picture corresponding to the picture data.
Although the non-live display end cannot display the picture in the form of a live picture, what it downloads is the picture file including the video data; the non-live display end merely displays an ordinary still picture, and the video data in the picture file is not lost. Therefore, when the picture file is uploaded again through the non-live display end and then downloaded by a live display end, it can still be displayed in the form of a live picture. Likewise, when the non-live display end sends the picture file to a live display end, the live display end can still display the picture file including the video data.
Fig. 4 and 5 are schematic diagrams of viewing a picture file at a non-live display end in different embodiments. Fig. 4 shows an interface on a web page, where the thumbnail of the synthesized picture file is displayed only as a still picture; if the full-size picture is viewed, a large still picture is displayed, which is still a still picture in nature. Fig. 5 shows an interface after the synthesized picture file is downloaded to a personal computer (PC), in which 502 is the thumbnail of the synthesized picture file and 504 is part of the content of the displayed still picture; that is, after being downloaded to the PC, the file is displayed as an ordinary still picture. It will be appreciated that although only still pictures are shown on the web page and the personal computer, the video data in the picture file is not lost. It should be noted that fig. 4 and 5 show the synthesized picture file displayed only as a still picture on a web page and a personal computer without the live picture display function; they do not apply to a web page or a personal computer that has the live picture display function through an integrated presentation library.
In the above embodiment, after the synthesized picture file is uploaded and then downloaded by a live display end, the original video file and the still picture file of the live picture can be restored from the single picture file, so that the generated video file is played when the picture file is displayed and the live picture can be viewed. Since only a single picture file needs to be received to view a live picture, the requirements are reduced and cost is saved.
In one embodiment, the method further comprises: displaying the visual identification of the uploaded picture file in an uploaded file set; the uploaded file set is a set of visual identifications of the uploaded files; and displaying the live picture marks corresponding to the visual identification of the picture file.
The visual identification of the picture file is used for simplified visual display of the picture file.
In one embodiment, the visual identification of the picture file may be a thumbnail of the picture file.
The uploaded file collection is a collection of visual identifications of files uploaded by a computer device. That is, the uploaded file set includes a visual identification of at least one uploaded file.
In one embodiment, the set of uploaded files may be a list of uploaded files.
In one embodiment, a live picture flag is used to identify a live picture. It can be understood that the picture carrying the live picture flag is the live picture.
Specifically, the computer device may present the visual identification of the picture file in the uploaded file collection after uploading the synthesized picture file. The computer device may present a live picture mark corresponding to the visual identification of the picture file.
In one embodiment, the computer device may present the live picture markup at a preset location in the visual identification of the picture file. In another embodiment, the computer device may also display the live picture mark not directly in the visual identifier of the picture file, but in a correlated manner, corresponding to the visual identifier of the picture file (for example, in a row area where the visual identifier of the picture file is located, the live picture mark is displayed, or in a preset area range adjacent to the visual identifier of the picture file, the live picture mark is displayed).
Fig. 6 is a diagram illustrating uploading of a picture file in one embodiment. Referring to fig. 6, 602 is the thumbnail of a picture file (i.e., the visual identifier of the picture file), and the word "Live" in 602 is the live picture mark.
In the above embodiment, the live picture mark is displayed corresponding to the visual identifier of the picture file, so that the user can know which files are live pictures without downloading them, thereby avoiding the waste of system resources caused by unnecessary downloads.
In one embodiment, the method further comprises: when a viewing instruction for the uploaded picture file is received, downloading the synthesized picture file; reading video data from the picture file, and generating a video file according to the video data; and playing the generated video file when the picture file is displayed.
Specifically, after the computer device uploads the synthesized picture file, the user may perform an operation of viewing the uploaded picture file through the computer device to input a viewing instruction for the uploaded picture file.
In one embodiment, a user may trigger a visual identification of the picture file presented in the set of uploaded files to enter a viewing instruction for the uploaded picture file. It is understood that the user may also perform a triggering operation through a viewing entry provided by an interface of the computer device to input a viewing instruction for the uploaded picture file.
The computer device may download the synthesized picture file from the server when receiving a viewing instruction for the uploaded picture file. Further, the computer device may read video data from the picture file and generate a video file from the video data. The computer device may present the picture file and play the generated video file while presenting the picture file.
In one embodiment, the computer device may directly separate the video data from the picture file and generate a video file from the separated video data. The computer device can display the picture file from which the video data has been separated, and play the generated video file while displaying it. It can be understood that the picture file after separation is the still picture file included in the live picture, and the generated video file is the original video file included in the live picture. In this way, the hash value of the separated picture file is consistent with that of the still picture file of the live picture, and the hash value of the generated video file is consistent with that of the original video file. Therefore, when backing up the separated picture file or the generated video file, the hash value can be used directly to judge whether the file is a duplicate; if it is not a duplicate, it can be backed up.
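A minimal sketch of the hash-based duplicate check just described. The patent does not name a hash algorithm or a backup interface; SHA-256 and the in-memory hash index below are assumptions for illustration:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Hash of the separated picture file or generated video file."""
    return hashlib.sha256(data).hexdigest()

def backup_if_new(data: bytes, backed_up_hashes: set) -> bool:
    """Back up the file only if its hash is not already recorded;
    `backed_up_hashes` stands in for the backup side's hash index."""
    digest = file_hash(data)
    if digest in backed_up_hashes:
        return False              # duplicate: no need to back up again
    backed_up_hashes.add(digest)  # new file: record it and back it up
    return True
```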
In one embodiment, when video data is embedded into a still picture file in the form of a video tag segment, a picture file from which the video data is separated can be obtained by deleting the video tag segment. When the video data is embedded into the still picture file in the form of the video storage data block, the picture file from which the video data is separated can be obtained by deleting the video storage data block.
In another embodiment, the computer device may also only read the video data from the picture file to generate the video file, without deleting the video data from the picture file. In that case, when the picture file is displayed, the still picture can be displayed from the picture data in the picture file while the generated video file is played. Although the video data in the picture file is not deleted, the effect of displaying a live picture is still achieved. A sketch of the separation step is given below.
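Continuing the earlier embedding sketch, the following sketch reads the embedded video back out of the synthesized JPEG under the same assumed APP11/"LIVE" layout. It returns the pieces of video data keyed by their position header, together with the picture file from which the video data has been separated (the tag segments deleted):

```python
import struct

def separate_video(picture_bytes: bytes):
    """Walk the JPEG segment stream, collect the 'LIVE' payloads,
    and drop their tag segments while keeping everything else."""
    assert picture_bytes[:2] == b"\xff\xd8", "not a JPEG picture file"
    out = bytearray(picture_bytes[:2])       # keep the SOI marker
    chunks = {}                              # position -> video data piece
    pos = 2
    while pos + 4 <= len(picture_bytes):
        marker = picture_bytes[pos:pos + 2]
        if marker == b"\xff\xda":            # SOS: copy scan data verbatim
            out += picture_bytes[pos:]
            break
        length = struct.unpack(">H", picture_bytes[pos + 2:pos + 4])[0]
        segment = picture_bytes[pos:pos + 2 + length]
        payload = segment[4:]                # skip marker + length field
        if marker == b"\xff\xeb" and payload[:4] == b"LIVE":
            index = struct.unpack(">H", payload[4:6])[0]
            chunks[index] = payload[8:]      # strip id + index/count header
        else:
            out += segment                   # unrelated segment: keep it
        pos += 2 + length
    return chunks, bytes(out)
```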
FIG. 7 is a diagram that illustrates viewing of a picture file at a live presentation side, in one embodiment. The live exhibition end can be a cloud service application (such as a network disk APP) with a live picture exhibition function. As can be seen from fig. 7, since the downloaded picture file has the word "Live" of the Live picture mark, the cloud service application can play the video file when the picture file is displayed.
In the above embodiment, the original video file and the still picture file of the live picture can be restored by viewing the picture file of the single file, so that the generated video file is played when the picture file is displayed, and the live picture can be viewed. By only receiving the picture files of a single file, the live pictures can be viewed, the requirement is reduced, and the cost is saved. In addition, the party who uploads the live pictures also has the function of viewing the live pictures by viewing the picture files of the single file, so that the usability is improved, another application which can support the live picture viewing function does not need to be additionally developed, and the cost is also saved.
In one embodiment, generating a video file from video data comprises: determining the corresponding position of each read video data in an original video file; sequencing the video data according to the sequence of the corresponding positions of the video data from front to back; and sequentially splicing the video data according to the ascending order of the sequence to obtain a video file.
It can be understood that, when the video data embedded in the picture file is multiple, the computer device may determine the corresponding positions of the video data in the original video file, and sequence the video data according to the sequence from the front to the back of the corresponding positions of the video data. Further, the computer device may sequentially stitch the video data according to the ascending order of the sequence to obtain the video file.
For example, the original video file is sequentially divided into 8 pieces of video data P1-P8, and then, during splicing, the video file is generated by splicing according to the sequence of P1-P8. In this way, the generated video file coincides with the original video file.
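Continuing the sketches above, the splicing step sorts the separated pieces by their recorded position, front to back, and concatenates them in ascending order; under the assumed layout the result is byte-identical to the original video file:

```python
def splice_video(chunks: dict) -> bytes:
    """Splice the video data pieces in ascending order of their
    positions in the original video file to obtain the video file."""
    return b"".join(chunks[i] for i in sorted(chunks))

# Round trip under the assumed layout from the earlier sketches:
# picture = embed_video(jpeg_bytes, video_bytes)
# chunks, still = separate_video(picture)
# assert splice_video(chunks) == video_bytes   # consistent with original
```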
In this embodiment, the pieces of video data are spliced in order, and a video file consistent with the original video file can be generated, ensuring the accuracy of live picture display. In addition, the original video file is in effect segmented into multiple pieces of video data for storage; compared with having to store the entire video data in one place, this reduces the difficulty of storage, thereby saving cost.
In one embodiment, playing the generated video file while presenting the picture file comprises: decoding the picture file by an image decoder in the integrated live picture presentation library; decoding the generated video file through a video decoder in the display library; screening view components adapted to a local operating system from view components provided in a display library; and when the picture file is displayed through the screened view component, playing the generated video file.
It can be understood that, when the computer device itself has the live picture display function, it can directly play the generated video file when displaying the picture file, without integrating a live picture presentation library. Therefore, this embodiment mainly provides a way of viewing live pictures for a computer device whose native functionality does not include live picture display.
In one embodiment, a computer device using an operating system that cannot support the live picture display function may be regarded as a computer device without the live picture display function.
The live-action picture display library is a library for realizing display in a live-action picture form. It can be understood that the computer device integrated with the live picture display library can have the live picture display function.
The live picture presentation library may include an image decoder, a video decoder, and a view component adapted to an operating system.
The image decoder is a decoder capable of decoding the picture format of a still picture file in a live-action picture. It will be appreciated that since the synthesized picture file retains the format of the still picture file in the live picture, the image decoder is able to decode the synthesized picture file.
In one embodiment, the picture format of the still picture file in the live picture may include the HEIC format. A computer device without the live picture display function cannot decode an image file in the HEIC format with its native functionality, so the relevant picture decoding can be performed by the image decoder in the integrated presentation library.
A video decoder is a decoder capable of decoding the video format of a video file in a live picture. It will be appreciated that if the video file in the live picture is in a common, standard video format, then the video decoder is one having common video decoding capabilities. If the video file in the live picture is in a special format, then the video decoder is a decoder that has the capability to decode the video file in that special format.
It is understood that the view components included in the presentation library are view components adapted to operating systems that cannot support the live picture presentation function. For an operating system that can support the live picture presentation function (e.g., the iOS system), the operating system itself can present live pictures, so no view component from the presentation library needs to be integrated.
In one embodiment, the presentation library of live pictures may include view components adapted to at least one operating system. The operating systems may include the Android operating system and the Microsoft Windows operating system, among others. The presentation library can therefore include a view component adapted to the Android operating system and a view component adapted to the Microsoft Windows operating system, and may further include view components adapted to other operating systems; this is not limited here.
In particular, a presentation library of live pictures may be pre-integrated in the computer device. The computer equipment can read the video data from the picture file through the display library and generate a video file according to the video data.
In one embodiment, the computer device may separate the video data directly from the picture file through the presentation library, such that the video data in the picture file is deleted as a result of being separated, leaving only the original picture data. The computer device may also read only the video data through the presentation library without deleting the read video data from the picture file.
Further, the computer device may decode the downloaded picture file according to an image decoder in the presentation library, and decode the generated video file through a video decoder in the presentation library.
It is to be understood that when video data is separated from a picture file, the picture file after the video data is separated is decoded by the computer device according to the image decoder in the presentation library.
The computer device may determine a local operating system. The local operating system is an operating system used by the computer device itself. The computer device may filter view components adapted to the local operating system from view components provided in the presentation library. The computer equipment can display the picture file through the screened view component, and play the generated video file through the view component while displaying the picture file.
In one embodiment, the view component includes an image view component and a video playing component. The computer device can display the picture file through the image view component and, while displaying it, play the generated video file through the video playing component, as illustrated in the sketch below.
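A minimal sketch of screening the view component adapted to the local operating system. The component classes, their show() interface, and the platform keys are hypothetical; they only mirror the structure described above:

```python
class AndroidLivePhotoView:
    """Hypothetical view component adapted to the Android system."""
    def show(self, still_image, video):
        # Display the decoded picture via the image view component and
        # play the decoded video via the video playing component (stub).
        pass

class WindowsLivePhotoView:
    """Hypothetical view component adapted to the Windows system."""
    def show(self, still_image, video):
        pass

VIEW_COMPONENTS = {
    "android": AndroidLivePhotoView,
    "windows": WindowsLivePhotoView,
}

def select_view_component(local_os: str):
    """Screen the view component adapted to the local operating system
    from the view components provided in the presentation library."""
    try:
        return VIEW_COMPONENTS[local_os.lower()]()
    except KeyError:
        raise RuntimeError(f"no view component adapted to {local_os}")
```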
In one embodiment, reading video data from a picture file and generating a video file from the video data comprises: separating the video data from the downloaded picture file through a separation tool in the display library; and generating a video file according to the separated video data. In this embodiment, when displaying the picture file through the screened view component, playing the generated video file includes: and when the picture file with the video data separated out is displayed through the screened view component, playing the generated video file.
In this embodiment, the display library may further include a separation tool. The separation tool is used for separating the video data and generating a video file. Namely, the separation tool is configured to separate video data from a picture file and generate a video file from the separated video data.
In one embodiment, when the separated video data is multiple copies, the computer device may determine, by the separation tool, a corresponding position of each video data in the original video file, respectively; sequencing the video data according to the sequence of the corresponding positions of the video data from front to back; and sequentially splicing the video data according to the ascending order of the sequence to obtain a video file.
The computer equipment can display the picture file with the video data separated through the screened view component, and play the generated video file when displaying the picture file. It will be appreciated that the picture file from which the video data is separated no longer includes video data, and therefore, coincides with a still picture file included in a live picture.
In one embodiment, the computer device may display the picture file from which the video data has been separated through the image view component, and play the generated video file through the video playing component while the picture file is displayed.
It can be understood that, since the presentation library can include view components adapted to different operating systems together with a unified set of decoders capable of decoding both the picture file and the video file of a live picture, the presentation library enables cross-platform presentation of live pictures. Cross-platform here means adaptable to different operating system platforms.
FIG. 8 is a block diagram of a live picture presentation library in one embodiment. Referring to FIG. 8, the bottom layer is a library written in a cross-platform language. It should be noted that fig. 8 takes as an example a presentation library written in a programming language usable on both the android system platform and the Windows system platform, so only view components matching the android system and the Windows system are shown in the upper layer of fig. 8. Referring to fig. 8, the separation tool provides the capability to separate a synthesized picture file into a video file and a picture file, where the separated video file corresponds to the original video file of the live picture and the separated picture file corresponds to the original still picture file. The universal video decoder decodes video files in universal, standard formats. The image decoder is a decoder capable of decoding the still picture file of the live picture; it may be a HEIC image decoder, i.e., a lightweight decoding library for pictures in the HEIC format, used for decoding HEIC-format picture files. Because HEIC is a relatively new picture format, the operating systems of some non-iOS platforms cannot yet present HEIC pictures, so the image decoder in fig. 8 is needed to provide the decoding capability. The upper layer consists of view components written separately for different operating system platforms, such as a view component written for the android system platform and a view component written for the Windows system platform. The interaction logic and visual effects included in the view components of each platform may be kept consistent with those provided by the native operating system to which the live pictures correspond. For example, if the native operating system of the live picture is the iOS system, the interaction logic and visual effects included in the view components of each platform are consistent with the native iOS experience (e.g., a fade-in and fade-out effect when a video is played on long press).
It is understood that the separation tool may be wylivephotoskit, a tool provided by Weiyun (micro cloud) for separating the original video file and the picture file out of the synthesized picture file.
The principle of live picture presentation is now described with reference to fig. 8. After the computer device downloads the picture file, it can separate the video data from the picture file through the integrated separation tool shown in fig. 8 and generate a video file from that data. The generated video file is fed into the video decoder for decoding, and the picture file with the video data separated out is fed into the image decoder for decoding. The decoded video file and picture file are then passed to the view component adapted to the local operating system: the decoded picture file is displayed through the image view component, and the decoded video file is played through the video playing component while the picture file is displayed, thereby realizing presentation of the live picture.
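This flow may be summarized in a Python-style sketch; every name here (separate, video_decoder, image_decoder, pick_view_component, show_image, play_video) is a hypothetical placeholder for a capability of the presentation library, not an actual API:

    # Hypothetical outline of the presentation flow of fig. 8; all callables
    # stand in for presentation-library capabilities and are assumptions.
    def show_live_picture(picture_path, library):
        video_bytes, still_bytes = library.separate(picture_path)  # separation tool
        frames = library.video_decoder.decode(video_bytes)         # universal video decoder
        image = library.image_decoder.decode(still_bytes)          # e.g. HEIC image decoder
        view = library.pick_view_component()   # view component adapted to the local OS
        view.show_image(image)                 # display the still picture
        view.play_video(frames)                # play the video while the picture shows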
It should be noted that fig. 8 is only an example and is not limiting. For example, since the web can only run code written in JavaScript, the cross-platform presentation library shown in fig. 8 cannot be used there directly. However, the logic of the presentation library in fig. 8 can be rewritten in JavaScript to produce a live picture presentation library for the web; integrating that library in the web then enables live picture presentation. It should be noted that, because the web browser has its own video decoding capability, a JavaScript presentation library for live pictures can rely on the browser's built-in video decoding instead of shipping a dedicated video decoder.
It is understood that a general live picture presentation library may also be written according to the logic and architecture shown in fig. 8, in a single language usable both on the web and on platforms such as android and Windows; this is not limited here.
It should be noted that fig. 8 mainly illustrates that cross-platform presentation of live pictures can be realized by integrating a live picture presentation library; the library only needs the capability to separate video data, synthesize a video file, encode and decode the video file and the picture file of a live picture, and provide presentation view components adapted to different platforms, and is not limited to the structure shown in fig. 8. On the basis of fig. 8, the architecture of the presentation library may be adjusted to actual needs, for example by merging or splitting functions and replacing components.
In this embodiment, integrating the live picture presentation library allows a device that originally lacks a live picture viewing function to present live pictures, realizing cross-platform presentation of live pictures.
In one embodiment, step S204 includes: determining a custom data storage area in the still picture file; and storing the video data of the original video file in a video storage object of the custom data storage area to obtain a picture file including the video data; wherein the video storage object includes a live picture identifier, used to indicate that the video data stored by the video storage object belongs to the video data included in the live picture.
The custom data storage area is an area provided by the format of the still picture file specifically for storing application-defined custom data. The video storage object is an object for storing video data.
It will be appreciated that the location of the custom data storage area may be different and the type of video storage object may be different for still picture files of different formats.
In one embodiment, when the format of the still picture file is JPG, the custom data storage area may be located before the compressed image data.
FIG. 9 is a diagram illustrating the format of a still picture file in one embodiment, here the JPG format. Referring to fig. 9, the custom data storage area is located in the additional marker segment area, i.e., before the storage area of the compressed image data. It will be appreciated that the additional marker segment area, namely the defined APPn marker segments, is used to store application custom data.
In one embodiment, when the format of the still picture file is HEIC, the custom data storage area may be located at the end of the file.
Fig. 10 is a diagram illustrating the format of a still picture file in another embodiment, here the HEIC format. Referring to FIG. 10, the custom data storage area is located after "mdat". Each block in FIG. 10 represents a data block (box), for example data blocks of the types "ftyp", "moov", "meta", and "mdat". "ftyp" is the flag identifying the file as HEIC and contains some file information. "moov" is used to store the image sequence; specifically, single-frame images are stored in the sub data block "trak". The "meta" data block contains metadata, such as the "Item info" basic description, the "Item location", and "other Item-specific metadata". The "mdat" data block contains media data; for example, "Item's Encoded bitstream" is an encoded bitstream.
Specifically, the computer device may determine the custom data storage area from the still picture file according to the format of the still picture file. The computer device can create a video storage object in the custom data storage area, and store video data of the original video file in the video storage object to obtain a picture file comprising the video data.
In one embodiment, when the format of the still picture file is JPG, the video storage object may be a video marker segment.
The video marker segment is a marker segment used for storing video data. A video marker segment may include default header information, a length, a live picture identifier, a sequence number, and video data. The live picture identifier indicates that the video data stored by the video storage object belongs to the video data included in a live picture.
In one embodiment, when the format of the still picture file is HEIC, the video storage object may be a video storage data block. In this embodiment, storing the video data of the original video file in the video storage object of the custom data storage area includes: creating a data block in the custom data storage area, and storing the video data of the original video file in the data storage area of the created data block.
The video storage data block is a data block used for storing video data. It may include a live picture identifier serving as header information, metadata information, and the video data. The metadata information includes the number of metadata strings and the specific strings representing the metadata.
The data block (box) follows the standard ISO base media file format box syntax:

    aligned(8) class Box (unsigned int(32) boxtype,
                          optional unsigned int(8)[16] extended_type) {
        unsigned int(32) size;
        unsigned int(32) type = boxtype;
        if (size == 1) {
            unsigned int(64) largesize;
        } else if (size == 0) {
            // box extends to end of file
        }
        if (boxtype == 'uuid') {
            unsigned int(8)[16] usertype = extended_type;
        }
    }
This description is to be understood as follows: the size field indicates the size of the data block and the type field indicates its type. The 4 bytes (32 bits) at the beginning of a data block (box) give the size of the whole box, including the box header and the box body, so that each data block in the file can be located. If size is 1, the data block is of large size and the true size value is to be read from the largesize field. If size is 0, the data block is the last one in the file and ends at the end of the file. The 32 bits following the size are the type of the data block (box type), generally 4 characters with predefined, fixed meanings. If the box type is "uuid", the data block (box) is of a user extension type.
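A minimal Python sketch of reading one box header according to these rules (standard ISO base media file format semantics; the function name is illustrative):

    import struct

    def read_box_header(f):
        # Read the size (32-bit) and type (4 characters) of one data block (box).
        header = f.read(8)
        if len(header) < 8:
            return None
        size, box_type = struct.unpack(">I4s", header)
        header_len = 8
        if size == 1:  # the true size is in the following largesize field
            size = struct.unpack(">Q", f.read(8))[0]
            header_len += 8
        # size == 0 means this box is the last one and runs to end of file;
        # box_type == b"uuid" means the box is a user extension type.
        return box_type, size, header_len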
FIG. 11 is a diagram of a video storage object in one embodiment. Fig. 11 (a) shows the structure of one video marker segment, using the APP4 marker segment as an example. The video marker segment starts with the code FFE4 (i.e., 0xFF, 0xE4 in the figure), followed by two bytes indicating the length, and then a live picture identifier, i.e., the null-terminated string "Weiyun Live Photo\0", indicating that the marker segment is used by the Weiyun platform to store video data of a live picture. A null-terminated string is a character string ending with a zero byte. The next byte records the sequence number of the video marker segment, and the rest is the carried video data. Fig. 11 (b) is a schematic diagram of a video storage data block. The header information "wylp" is a live picture identifier indicating that this data block is used to store video data of a live picture. "Meta Data Count 0x02" indicates the number of metadata strings; null-terminated strings such as "version\0" and "1.0\0" carry the specific metadata ("version" being the version), and the remaining content, "Video File Data", is the video data.
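Following the layout of fig. 11 (a), building one video marker segment may be sketched as follows; this is a reconstruction from the described layout, using APP4 and the identifier string from the example above, not code from the patent:

    # Sketch: one APP4 video marker segment per fig. 11 (a): 0xFF 0xE4,
    # a 2-byte length (which counts itself, per JPEG convention), the
    # null-terminated live picture identifier, a 1-byte sequence number,
    # and the carried video data.
    LIVE_PHOTO_ID = b"Weiyun Live Photo\x00"

    def build_video_marker_segment(seq_no, video_slice):
        payload = LIVE_PHOTO_ID + bytes([seq_no]) + video_slice
        length = len(payload) + 2  # JPEG length field includes its own 2 bytes
        assert length <= 0xFFFF, "slice too large for one marker segment"
        return b"\xFF\xE4" + length.to_bytes(2, "big") + payload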
Fig. 12 is a schematic structural diagram of a synthesized picture file in one embodiment. Fig. 12 (a) shows the structure of a picture file synthesized by storing video data in the video marker segments of a still picture file in JPG format. The original video file 1202 of the live picture (Live Photo) is divided into N pieces of video data (part1 to partN), and the pieces are then added, in the order of their positions in the original video file, to the video marker segments idx1 to idxN to synthesize the picture file. idx0 is used to record metadata (i.e., Meta Data). It is understood that recording the metadata in the idx0 video marker segment facilitates subsequent upgrades, that is, the metadata (for example, updated version information) can be updated after a later upgrade; the metadata may be stored as a number of null-terminated strings forming key-value pairs. idx1 to idxN record the substantive video data; for example, part1 is added to the video marker segment corresponding to idx1 and partN to the video marker segment corresponding to idxN. It is understood that 1204 in fig. 12 is the format structure of the synthesized picture file. Note that the detailed structure of the video marker segment corresponding to each idx is as shown in fig. 11 (a); the part of fig. 11 (a) from the header through the sequence number is collectively referred to as the segment header information (i.e., Live Photo Segment: Header) in fig. 12 (a). Equivalently, on the basis of the JPG format structure shown in fig. 9, a number of video marker segments as shown in fig. 11 (a) are created before the area recording the compressed image data of the still picture file, and the sliced video data is then added to the corresponding video marker segments in order, with one marker segment reserved for recording metadata, thereby synthesizing the picture file.
Fig. 12 (b) shows the structure of a picture file synthesized by storing video data in a video storage data block of a still picture file in the HEIC format. Since the data storage capacity of a single data block is large, all the video data of the original video file can be stored together in one video storage data block. Equivalently, on the basis of the HEIC format structure shown in fig. 10, a custom video storage data block as shown in fig. 11 (b) is created at the end of the still picture file, and the video data is then stored in the data storage area of that data block, yielding the synthesized picture file. For simplicity, fig. 12 (b) does not show the contents of the "moov" data block of fig. 10; in fact the structure of the synthesized picture file includes those contents, which are merely not drawn.
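As a hedged sketch of the HEIC case, appending such a custom video storage data block could look as follows; the field widths (e.g., a 1-byte metadata count) and the absence of an outer size/type box header are simplifying assumptions, not details fixed by the patent:

    # Sketch: append a "wylp" video storage data block to the end of a HEIC
    # file, per fig. 11 (b) and fig. 12 (b): the live picture identifier as
    # header, a metadata-string count, null-terminated metadata strings,
    # then the whole video data. Field widths are assumptions.
    def append_video_box(heic_bytes, video_bytes,
                         metadata=(b"version\x00", b"1.0\x00")):
        body = b"wylp"                  # live picture identifier
        body += bytes([len(metadata)])  # Meta Data Count, e.g. 0x02
        body += b"".join(metadata)      # null-terminated metadata strings
        body += video_bytes             # Video File Data
        return heic_bytes + body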
In the above embodiment, the video data is stored in a video storage object of the custom data storage area of the still picture file to synthesize the picture file, so that the synthesized picture file still complies with the picture standard and the video data is not easily lost. In addition, because the video storage object includes the live picture identifier, it can be distinguished from other stored data and is unlikely to conflict with data added by other software, and thus unlikely to cause abnormal behavior in other software.
In one embodiment, the video storage object includes a video marker segment. Storing the video data of the original video file in the video storage object of the custom data storage area comprises: when the size of the original video file exceeds the data storage capacity of a single video marker segment, slicing the original video file according to that data storage capacity, the size of each resulting video data slice being less than or equal to the data storage capacity; and adding the sliced video data to the data storage areas of the respective video marker segments.
The data storage capacity of a single video marker segment refers to the amount of data that a single video marker segment can store. The data storage area refers to the area of the video marker segment used for storing the substantive data.
Specifically, the computer device may compare the size of the original video file with the data storage capacity of a single video marker segment. When the size is less than or equal to the capacity, a single video marker segment can hold the video data of the original video file, and the video data can be stored directly in a single video marker segment.
In general, however, the data that a video marker segment can store is limited and cannot hold all the video data of the original video file, so multiple video marker segments are needed. Thus, when the size of the original video file exceeds (i.e., is greater than) the data storage capacity of a single video marker segment, the computer device slices the original video file according to that capacity, the size of each resulting video data slice being less than or equal to the capacity, and adds the sliced video data to the data storage areas of the respective video marker segments.
In one embodiment, the computer device may divide the original video file into equal slices according to the data storage capacity. In other embodiments, the division need not be equal, as long as each piece of video data can be stored in a single video marker segment.
In one embodiment, the computer device may add the corresponding video data to the data storage area of each video marker segment and assign corresponding sequence numbers to the video marker segments according to the order of the positions of the video data in the original video file. The ordering of the sequence numbers of the video data across the video marker segments is consistent with the front-to-back order of the positions of the video data in the original video file.
In another embodiment, the computer device may assign a corresponding sequence number to each video marker segment when the segment is created; then, when video data is added to the video marker segments, each piece of video data is added in turn to the video marker segment with the corresponding sequence number. The ordering of the sequence numbers of the video marker segments to which the video data is added is consistent with the order of the positions of the video data in the original video file.
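The slicing step itself may be sketched as follows; the capacity value is an illustrative assumption derived from the 16-bit JPEG length field minus the overhead of the length field, identifier, and sequence number, not a number given by the patent:

    # Sketch: cut the original video into consecutive slices no larger than
    # what one video marker segment can hold; slice order follows the data's
    # position in the original video file.
    SEGMENT_OVERHEAD = 2 + len(b"Weiyun Live Photo\x00") + 1  # length + id + seq
    CAPACITY = 0xFFFF - SEGMENT_OVERHEAD  # assumed per-segment payload limit

    def slice_video(video_bytes, capacity=CAPACITY):
        return [video_bytes[i:i + capacity]
                for i in range(0, len(video_bytes), capacity)]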
In the above embodiment, using the video marker segment as the video storage object keeps the synthesized file compliant with the picture standard. In addition, slicing the original video file into multiple pieces of video data stored across the video marker segments can reduce the storage requirement and save cost.
FIG. 13 is a timing diagram illustrating a picture processing method in one embodiment. Referring to fig. 13, the timing diagram involves a first terminal, a server, a second terminal, and a third terminal, wherein:
1) The user selects a live picture to be uploaded;
2) the first terminal obtains the information of the live picture selected by the user; the information includes an original video file and a still picture file;
3) the first terminal receives the user's upload instruction for the live picture;
4) the first terminal determines a custom data storage area in the still picture file and slices the original video file according to the data storage capacity of a single video marker segment in the custom data storage area; the video marker segment includes a live picture identifier;
5) the first terminal adds the sliced video data to the data storage areas of the video marker segments to obtain a picture file including the video data;
6) the first terminal uploads the synthesized picture file to the server;
7) the first terminal displays the visual identifier of the uploaded picture file in the uploaded file set, the uploaded file set being the set of visual identifiers of uploaded files, and displays the live picture mark corresponding to the visual identifier of the picture file.
It is understood that both the second terminal and the third terminal may download the synthesized picture file from the server. To illustrate different scenarios, the second terminal is taken as a device with a live picture presentation function, and the third terminal as a device without one.
8) The second terminal and the third terminal each download the synthesized picture file from the server.
9) The second terminal reads, through the separation tool in the integrated live picture presentation library, the video marker segments including the live picture identifier from the picture file, and reads the video data and the corresponding sequence numbers from the data storage areas of those video marker segments.
10) The second terminal splices the read video data in ascending order of the sequence numbers to generate a video file, and deletes the video marker segments from the picture file.
11) The second terminal decodes the picture file through the image decoder in the presentation library, and decodes the generated video file through the video decoder in the presentation library.
12) The second terminal screens, from the view components provided in the presentation library, the view component adapted to the local operating system, and plays the generated video file while displaying, through the screened view component, the picture file with the video marker segments deleted.
13) The third terminal reads the picture data included in the synthesized picture file and displays the still picture corresponding to that picture data.
It should be noted that the first terminal may also act as a second terminal or a third terminal: when the first terminal has the live picture presentation function, it may also execute the steps performed by the second terminal, and when it does not, it may execute the steps performed by the third terminal.
In addition, fig. 13 only illustrates the JPG format. When the picture is in the HEIC format, step 4) is replaced with: the first terminal determines a custom data storage area in the still picture file and creates a video storage data block in the custom data storage area; and step 5) is replaced with: the first terminal stores the video data of the original video file in the data storage area of the created data block. Step 9) is replaced with: the second terminal determines, through the separation tool in the integrated live picture presentation library, the video storage data block including the live picture identifier from the picture file; and step 10) is replaced with: the second terminal reads the video data from the video storage data block, generates a video file from the read video data, and deletes the video storage data block. In step 12), "video marker segments" is replaced with "video storage data block".
As shown in fig. 14, in one embodiment, another picture processing method is provided. The method may be applied to a computer device, which may be the second terminal shown in fig. 1A or 1B. The method specifically includes the following steps:
S1402, receiving a viewing instruction for a picture file; the picture file is synthesized by embedding the video data of an original video file into a still picture file; the still picture file and the original video file belong to the information of a live picture.
The viewing instruction is an instruction for viewing the picture file.
It is to be understood that the viewing instruction for an acquired picture file may be received after the computer device acquires the picture file, or the picture file may be acquired (e.g., downloaded) in response to the viewing instruction and then viewed. This is not limited here.
In one embodiment, a computer device may receive a viewing instruction for a picture file and download the picture file for which the viewing instruction is directed.
S1404, reading video data from the picture file.
In one embodiment, step S1404 includes: screening, from the custom data storage area of the picture file, the video storage objects that include the live picture identifier, and reading the video data from the data storage areas of the screened video storage objects.
In one embodiment, the video storage object includes at least one of a video marker segment and a video storage data block.
In one embodiment, the computer device may separate the video data from the data storage areas of the screened video storage objects through the separation tool in the presentation library.
S1406, generating a video file from the video data.
Wherein the generated video file conforms to the original video file.
In one embodiment, when there are multiple pieces of video data, generating a video file from the video data in step S1406 includes: determining the position of each piece of read video data in the original video file; sorting the video data from front to back according to those positions; and splicing the video data in ascending order to obtain a video file.
In one embodiment, the multiple pieces of video data are each stored in the data storage area of a corresponding video storage object, which may be a video marker segment. In this case, the computer device may sort the screened video marker segments in ascending order of their sequence numbers and splice the video data stored in their data storage areas in that order to obtain the video file.
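For illustration, the reading side for the JPG case may be sketched as a simplified walk over the JPEG marker segments; error handling and markers without a length field are omitted, and the names are assumptions:

    # Sketch: scan a JPG byte stream for APP4 marker segments that begin
    # with the live picture identifier and collect (sequence number, data)
    # pairs for splicing. Simplified; stops at start-of-scan (0xDA).
    LIVE_PHOTO_ID = b"Weiyun Live Photo\x00"

    def extract_video_parts(jpg):
        parts, i = [], 2  # skip the 2-byte SOI marker
        while i + 4 <= len(jpg) and jpg[i] == 0xFF:
            marker = jpg[i + 1]
            length = int.from_bytes(jpg[i + 2:i + 4], "big")
            payload = jpg[i + 4:i + 2 + length]
            if marker == 0xE4 and payload.startswith(LIVE_PHOTO_ID):
                seq = payload[len(LIVE_PHOTO_ID)]
                parts.append((seq, payload[len(LIVE_PHOTO_ID) + 1:]))
            if marker == 0xDA:  # entropy-coded image data follows
                break
            i += 2 + length
        return parts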
In one embodiment, the video data in step S1406 may be separated out of the picture file. After generating the video file from the separated video data, the computer device may delete the video storage object from the picture file.
S1408, playing the generated video file when the picture file is displayed.
In one embodiment, step S1408 includes: decoding the picture file through an image decoder in the integrated live picture presentation library; decoding the generated video file through a video decoder in the presentation library; screening, from the view components provided in the presentation library, the view component adapted to the local operating system; and playing the generated video file when the picture file is displayed through the screened view component.
In one embodiment, the picture file displayed in step S1408 may be the picture file after the video data is separated out, in which case the computer device plays the generated video file while displaying that picture file through the screened view component. In one embodiment, the picture file after the video data is separated may be the picture file after the video storage object is deleted.
According to the above picture processing method, when a picture file synthesized from the original video file and the still picture file of a live picture is viewed, video data is read from the picture file and a video file is generated from it, the generated video file conforming to the original video file. That is, the original video file and the still picture file of the live picture can be restored from the single picture file, so that the generated video file is played while the picture file is displayed and the live picture can thus be viewed. Since receiving a single picture file suffices for viewing a live picture, the requirements are reduced and cost is saved.
As shown in fig. 15, in one embodiment, there is provided a picture processing apparatus 1500, the apparatus 1500 including: an obtaining module 1502, a synthesizing module 1504, and a sending module 1506, wherein:
an obtaining module 1502 for obtaining information of a live picture; the information includes an original video file and a still picture file.
A synthesizing module 1504, configured to embed the video data in the original video file into the still picture file, and synthesize a picture file including the video data.
A sending module 1506, configured to send the synthesized picture file; when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file.
In one embodiment, the sending module 1506 is further configured to upload the synthesized picture file; after the picture file is downloaded by a live presentation terminal, the picture file is used to instruct the live presentation terminal to read video data from the picture file, generate a video file from the read video data, and play the generated video file when the picture file is displayed.
In one embodiment, the apparatus further comprises:
a display module 1508, configured to display the visual identifier of the uploaded picture file in the uploaded file set; the uploaded file set is a set of visual identifications of the uploaded files; and displaying the live picture marks corresponding to the visual identification of the picture file.
As shown in fig. 16, in one embodiment, the apparatus comprises: an obtaining module 1502, a synthesizing module 1504, a sending module 1506, a presentation module 1508, a downloading module 1510, and a video generating module 1512, wherein:
a download module 1510, configured to download the synthesized picture file when receiving a viewing instruction for the uploaded picture file.
The video generating module 1512 is configured to read video data from the picture file, and generate a video file according to the video data.
The presentation module 1508 is also used to play the generated video file when presenting the picture file.
In one embodiment, the video generation module 1512 is further configured to determine the position of each piece of read video data in the original video file, sort the video data from front to back according to those positions, and splice the video data in ascending order to obtain a video file.
In one embodiment, the presentation module 1508 is further configured to decode the picture file through an image decoder in the integrated live picture presentation library, decode the generated video file through a video decoder in the presentation library, screen, from the view components provided in the presentation library, the view component adapted to the local operating system, and play the generated video file when the picture file is displayed through the screened view component.
In one embodiment, the video generation module 1512 is further configured to separate the video data from the downloaded picture file through a separation tool in the presentation library and generate a video file from the separated video data. In this embodiment, the presentation module 1508 is further configured to play the generated video file when the picture file from which the video data has been separated is displayed through the screened view component.
In one embodiment, the uploaded picture file is further used for instructing a non-live presentation terminal, after the picture file is downloaded by the non-live presentation terminal, to read the picture data included in the picture file and display the still picture corresponding to the picture data.
In one embodiment, the composition module 1504 is further configured to determine a custom data storage area in the still picture file and store the video data of the original video file in a video storage object of the custom data storage area to obtain a picture file including the video data, wherein the video storage object includes a live picture identifier indicating that the video data stored by the video storage object belongs to the video data included in a live picture.
In one embodiment, the video storage object includes a video marker segment. The composition module 1504 is further configured to slice the original video file according to the data storage capacity of a single video marker segment when the size of the original video file exceeds that capacity, the size of each resulting video data slice being less than or equal to the capacity, and to add the sliced video data to the data storage areas of the respective video marker segments.
In one embodiment, the video storage object includes a data block. The composition module 1504 is further configured to create a video storage data block in the custom data storage area and store the video data of the original video file in the data storage area of the created data block.
As shown in fig. 17, in one embodiment, another picture processing apparatus 1700 is provided, the apparatus 1700 including: a receiving module 1702, a video generating module 1704, and a presenting module 1706, wherein:
a receiving module 1702, configured to receive a viewing instruction for a picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture.
A video generation module 1704, configured to read video data from the picture file; generating a video file according to the video data; the generated video file conforms to the original video file.
A display module 1706, configured to play the generated video file when displaying the picture file.
FIG. 18 is a diagram showing an internal configuration of a computer device according to an embodiment. Referring to fig. 18, the computer device may be the first terminal or the second terminal in fig. 1A or 1B. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device may store an operating system and a computer program. The computer program, when executed, may cause a processor to perform a picture processing method. The processor of the computer device is used for providing calculation and control capability and supporting the operation of the whole computer device. The internal memory stores a computer program, which when executed by the processor causes the processor to perform a method of processing pictures. The network interface of the computer device is used for network communication. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen. The input device of the computer equipment can be a touch layer covered on a display screen, a key, a track ball or a touch pad arranged on a terminal shell, an external keyboard, a touch pad or a mouse and the like. The computer device may be a personal computer, a smart speaker, a mobile terminal or a vehicle-mounted device, and the mobile terminal includes at least one of a mobile phone, a tablet computer, a personal digital assistant or a wearable device.
Those skilled in the art will appreciate that the architecture shown in fig. 18 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the picture processing apparatus provided in the present application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in fig. 18. The non-volatile storage medium of the computer device may store the program modules constituting the picture processing apparatus, such as the obtaining module 1502, the synthesizing module 1504, and the sending module 1506 shown in fig. 15, or the receiving module 1702, the video generating module 1704, and the presenting module 1706 shown in fig. 17. The computer program composed of the program modules causes the computer device to execute the steps of the picture processing methods of the embodiments of the present application described in this specification. Taking the picture processing apparatus 1500 in fig. 15 as an example, the computer device may obtain the information of a live picture through the obtaining module 1502, the information including an original video file and a still picture file; embed the video data of the original video file into the still picture file through the synthesizing module 1504 to synthesize a picture file including the video data; and send the synthesized picture file through the sending module 1506, where, when the picture file is displayed, the video file generated from the video data in the picture file is played, the generated video file conforming to the original video file.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described picture processing method. Here, the steps of the picture processing method may be steps in the picture processing methods of the above-described embodiments.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, causes the processor to perform the steps of the above-described picture processing method. Here, the steps of the picture processing method may be steps in the picture processing methods of the above-described embodiments.
It should be noted that "first", "second", and "third" in the embodiments of the present application are used for distinction only, and are not used for limitation in terms of size, order, dependency, and the like.
It should be understood that the steps in the embodiments of the present application are not necessarily performed in the order indicated by the step numbers. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (13)

1. A method of picture processing, the method comprising:
acquiring information of a live picture; the information comprises an original video file and a static picture file; the live picture is a file that, when the picture is taken, retains, on the basis of an ordinary still picture, video including sound from a period of time before and after the picture is taken;
when an upload instruction for the live picture is received, triggering embedding of the video data in the original video file into the static picture file, and synthesizing a picture file comprising the video data;
uploading the synthesized picture file;
when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file;
when a viewing instruction for the uploaded picture file is received, downloading the synthesized picture file;
reading video data from the picture file, and generating a video file according to the video data;
decoding the picture file through an image decoder in an integrated live picture presentation library;
decoding, by a video decoder in the presentation library, the generated video file;
screening a view component adapted to a local operating system from the view components adapted to different operating systems provided in the presentation library;
and when the picture file is displayed through the screened view component, playing the generated video file.
2. The method according to claim 1, wherein the picture file is used for instructing a live presentation terminal, after the picture file is downloaded by the live presentation terminal, to read video data from the picture file, generate a video file from the read video data, and play the generated video file when the picture file is presented.
3. The method of claim 2, further comprising:
displaying the visual identification of the uploaded picture file in an uploaded file set; the uploaded file set is a set of visual identifications of uploaded files;
and displaying a live picture mark corresponding to the visual identification of the picture file.
4. The method of claim 1, wherein generating a video file from the video data comprises:
determining the position of each piece of read video data in the original video file;
sorting the video data from front to back according to those positions;
and splicing the video data in ascending order to obtain a video file.
5. The method of claim 1, wherein reading video data from the picture file and generating a video file from the video data comprises:
separating video data from the downloaded picture file through a separation tool in the presentation library;
generating a video file according to the separated video data;
when the picture file is displayed through the screened view component, playing the generated video file comprises:
and when the picture file with the video data separated out is displayed through the screened view component, playing the generated video file.
6. The method according to claim 1, wherein the uploaded picture file is further used for instructing a non-live presentation terminal, after the picture file is downloaded by the non-live presentation terminal, to read picture data included in the picture file and present a still picture corresponding to the picture data.
7. The method according to any one of claims 1 to 6, wherein embedding the video data in the original video file into the static picture file and synthesizing a picture file comprising the video data comprises:
determining a custom data storage area in the static picture file;
storing the video data of the original video file in a video storage object of the custom data storage area to obtain a picture file comprising the video data;
wherein the video storage object comprises a live picture identifier; the live picture identifier is used for indicating that the video data stored by the video storage object belongs to the video data included in the live picture.
8. The method of claim 7, wherein the video storage object comprises a video marker segment, and storing the video data of the original video file in the video storage object of the custom data storage area comprises:
when the size of the original video file exceeds the data storage capacity of a single video marker segment, slicing the original video file according to the data storage capacity, wherein the size of each video data slice obtained by slicing is less than or equal to the data storage capacity;
and adding the sliced video data to the data storage areas of the respective video marker segments.
9. A method of picture processing, the method comprising:
receiving a viewing instruction for a picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture; the live picture is a file that, when the picture is taken, retains, on the basis of an ordinary still picture, video including sound from a period of time before and after the picture is taken;
reading video data from the picture file;
generating a video file according to the video data; the generated video file conforms to the original video file;
decoding the picture file through an image decoder in an integrated live picture presentation library;
decoding, by a video decoder in the presentation library, the generated video file;
screening a view component adapted to a local operating system from the view components adapted to different operating systems provided in the presentation library;
and playing the generated video file when the picture file is displayed through the screened view component.
10. A picture processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring information of a live picture; the information comprises an original video file and a static picture file; the live picture is a file that, when the picture is taken, retains, on the basis of an ordinary still picture, video including sound from a period of time before and after the picture is taken;
the synthesis module is used for triggering, when an upload instruction for the live picture is received, embedding of the video data in the original video file into the static picture file and synthesizing a picture file comprising the video data;
the sending module is used for uploading the synthesized picture file; when the picture file is displayed, playing a video file generated by video data in the picture file; the generated video file conforms to the original video file;
the downloading module is used for downloading the synthesized picture file when receiving a viewing instruction aiming at the uploaded picture file;
the video generation module is used for reading video data from the picture file and generating a video file according to the video data;
a presentation module, configured to decode the picture file through an image decoder in an integrated live picture presentation library; decode the generated video file through a video decoder in the presentation library; screen a view component adapted to a local operating system from the view components adapted to different operating systems provided in the presentation library; and play the generated video file when the picture file is displayed through the screened view component.
11. A picture processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a viewing instruction for a picture file; the picture file is obtained by embedding video data of an original video file into a static picture file and then synthesizing; the static picture file and the original video file belong to information of a live picture; the live picture is a file that, when the picture is taken, retains, on the basis of an ordinary still picture, video including sound from a period of time before and after the picture is taken;
the video generation module is used for reading video data from the picture file; generating a video file according to the video data; the generated video file conforms to the original video file;
a presentation module, configured to decode the picture file through an image decoder in an integrated live picture presentation library; decode the generated video file through a video decoder in the presentation library; screen a view component adapted to a local operating system from the view components provided in the presentation library; and play the generated video file when the picture file is displayed through the screened view component.
12. A computer arrangement comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method of any one of claims 1 to 9.
13. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1 to 9.
CN201910495571.XA 2019-06-10 2019-06-10 Picture processing method and device, computer equipment and storage medium Active CN110248116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910495571.XA CN110248116B (en) 2019-06-10 2019-06-10 Picture processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110248116A CN110248116A (en) 2019-09-17
CN110248116B true CN110248116B (en) 2021-10-26

Family

ID=67886278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910495571.XA Active CN110248116B (en) 2019-06-10 2019-06-10 Picture processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110248116B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7442302B2 (en) * 2019-11-22 2024-03-04 キヤノン株式会社 Data processing device, its control method, and program
CN110990626A (en) * 2019-12-09 2020-04-10 深圳市迅雷网络技术有限公司 Picture processing method, device and system and storage medium
CN111189460B (en) * 2019-12-31 2022-08-23 广州展讯信息科技有限公司 Video synthesis conversion method and device containing high-precision map track
CN112995536A (en) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 Video synthesis method and system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523410A (en) * 2011-12-28 2012-06-27 创新科存储技术(深圳)有限公司 Method for writing video data and video data storage equipment
CN103646048A (en) * 2013-11-25 2014-03-19 宇龙计算机通信科技(深圳)有限公司 Method and device for achieving multimedia pictures
JP2014123855A (en) * 2012-12-20 2014-07-03 Panasonic Corp Synthesized image generation device, synthesized image generation method and broadcast receiver
CN104125388A (en) * 2013-04-25 2014-10-29 广州华多网络科技有限公司 Method for shooting and storing photos and device thereof
CN105187911A (en) * 2015-09-28 2015-12-23 努比亚技术有限公司 Method and device for displaying video image and image display method
CN105245777A (en) * 2015-09-28 2016-01-13 努比亚技术有限公司 Method and device for generating video image
CN105354219A (en) * 2015-09-28 2016-02-24 努比亚技术有限公司 File encoding method and apparatus
CN106686298A (en) * 2016-11-29 2017-05-17 努比亚技术有限公司 Post-shooting processing method, post-shooting processing device and mobile terminal
CN106791361A (en) * 2016-11-22 2017-05-31 上海斐讯数据通信技术有限公司 An action shot display methods and system based on pressure sensitivity Touch Screen
CN107644056A (en) * 2017-08-04 2018-01-30 武汉烽火众智数字技术有限责任公司 A kind of file memory method, apparatus and system
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN109361880A (en) * 2018-11-30 2019-02-19 三星电子(中国)研发中心 A kind of method and system showing the corresponding dynamic picture of static images or video
CN109587196A (en) * 2017-09-29 2019-04-05 南京云照乐摄影有限公司 A kind of management system fast image processing and upload is facilitated to download

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697040B2 (en) * 2005-10-31 2010-04-13 Lightbox Network, Inc. Method for digital photo management and distribution
US8681234B2 (en) * 2010-09-28 2014-03-25 Sony Computer Entertainment America Llc System and methdod for capturing and displaying still photo and video content
US20150348587A1 (en) * 2014-05-27 2015-12-03 Thomson Licensing Method and apparatus for weighted media content reduction
CN105407282A (en) * 2015-11-16 2016-03-16 中科创达软件股份有限公司 Photographing method and replay method

Also Published As

Publication number Publication date
CN110248116A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN110248116B (en) Picture processing method and device, computer equipment and storage medium
US8271544B2 (en) Data file having more than one mode of operation
USRE48430E1 (en) Two-dimensional code processing method and terminal
US7386576B2 (en) Data file storage device with automatic filename creation function, data file storage program and data file storage method
CN101193075B (en) Method and apparatus for managing blog information
CN111083396B (en) Video synthesis method and device, electronic equipment and computer-readable storage medium
US20170097947A1 (en) Image Annotation for Image Auxiliary Information Storage and Retrieval
KR20050083715A (en) Method and apparatus for transmitting a digital picture with textual material
US9530453B2 (en) Apparatus, method, and computer-readable recording medium for creating and reproducing live picture file
JP2002204381A (en) Digital camera
KR100828479B1 (en) Apparatus and method for inserting addition data in image file on electronic device
US20070043792A1 (en) Image processing system
US20120251081A1 (en) Image editing device, image editing method, and program
CN111209727A (en) Picture processing method and device, electronic equipment and storage medium
KR20040042612A (en) Methods for fixing-up lastURL representing path name and file name of asset in MPV environment
US7610554B2 (en) Template-based multimedia capturing
US20140072223A1 (en) Embedding Media Content Within Image Files And Presenting Embedded Media In Conjunction With An Associated Image
US10972746B2 (en) Method of combining image files and other files
CN101482863A (en) Interest point information storage method
JP2015136089A (en) Video reproducer and video recorder
CN116088829A (en) Data processing method, device, storage medium and equipment
CN105893012A (en) Method and device for generating video screenshot in Android system
CN107851448B (en) Managing data
US20070211759A1 (en) Multiplexing device, multiplexing method, and multiplexing program
KR20150106472A (en) Method and apparatus for providing contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant