CN111263058A - Data processing method, device, storage medium and terminal


Info

Publication number: CN111263058A
Application number: CN202010059477.2A
Authority: CN (China)
Prior art keywords: data, image, audio, audio file, playing
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 黄树伟
Original and current assignee: Huizhou TCL Mobile Communication Co Ltd
Priority date: 2020-01-19
Filing date: 2020-01-19
Publication date: 2020-06-09

Classifications

    • H04N 5/23229 Control of cameras comprising an electronic image sensor, comprising further processing of the captured image without influencing the image pickup process
    • H04L 67/1097 Network-specific arrangements or communication protocols supporting networked applications for distributed storage of data in a network, e.g. network file system [NFS], transport mechanisms for storage area networks [SAN] or network attached storage [NAS]
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/440218 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display, by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The embodiments of the application disclose a data processing method, a data processing apparatus, a storage medium, and a terminal. The method comprises the following steps: receiving a shooting instruction; acquiring a current image according to the shooting instruction, and playing a specified audio file; encoding the pixel points in the image to obtain encoded data of the image; and establishing an association relation between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud. In this scheme, audio is played while the terminal takes a picture, the picture is stored within segmented audio, and the segmented audio carrying the stored picture is synchronously uploaded to the cloud; when the player outputs a segmented audio, the picture corresponding to that segmented audio is output synchronously. The picture therefore does not need to be stored on the terminal, which reduces occupation of the terminal's storage space.

Description

Data processing method, device, storage medium and terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a data processing method and apparatus, a storage medium, and a terminal.
Background
With the development of the internet and mobile communication networks, and the rapid growth of terminals' processing and storage capabilities, applications have spread into massive, widespread use, and their information content and presentation modes have become increasingly rich.
In the related art, when an intelligent mobile terminal takes a photo, the resulting picture is stored in the mobile terminal's storage space. As users' requirements for image quality keep rising, the data volume of a single photo grows ever larger, so the photos obtained when the terminal takes pictures occupy a large amount of the mobile terminal's storage and greatly reduce its available storage space.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, a storage medium and a terminal, which can reduce occupation of a storage space of the terminal.
In a first aspect, an embodiment of the present application provides a data processing method, including:
receiving a shooting instruction;
acquiring a current image according to the shooting instruction, and playing a specified audio file;
encoding the pixel points in the image to obtain encoded data of the image;
and establishing an association relation between the coded data of the image and corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
a receiving unit configured to receive a shooting instruction;
the playing unit is used for acquiring a current image according to the shooting instruction and playing a specified audio file;
the encoding unit is used for encoding the pixel points in the image to obtain encoded data of the image;
and the processing unit is used for establishing an association relation between the coded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud.
In some embodiments, the data processing apparatus further comprises:
a judging unit configured to judge whether or not playing of audio data associated with the encoded data of the image is detected in a process of playing the specified audio file;
and an output unit, configured to, if the judging unit's determination is affirmative, acquire corresponding encoded data from the cloud for output and display based on the playing progress of the currently associated audio data and the association relation.
In some embodiments, the data processing apparatus further comprises:
the marking unit is used for marking a starting point and an ending point of the audio data related to the coded data of the image before storing the related data to the cloud end;
the processing unit is specifically configured to:
when the audio data corresponding to the starting point mark is detected to be played, acquiring corresponding coded data from the cloud based on the playing progress of the currently associated audio data and the association relation;
and when the audio data corresponding to the end point mark is detected to be played completely, merging the encoded data acquired from the starting point mark to the end point mark, and displaying a corresponding picture based on the merged data.
In some embodiments, the processing unit is specifically configured to:
determining the current audio data which is being played according to the playing progress of the current associated audio data;
acquiring encoded data corresponding to the current audio data in real time based on the association relation;
and sequentially displaying pixel points corresponding to the acquired encoded data on an audio playing interface.
In some embodiments, the processing unit is specifically configured to:
determining the shooting starting time and the shooting ending time of the image;
determining a first audio file node corresponding to the shooting start time and a second audio file node corresponding to the shooting end time;
acquiring target audio data between a first audio file node and a second audio file node;
and establishing an association relation between the coded data of the image and the target audio data.
In some embodiments, the encoding unit is specifically configured to:
detecting the characteristic points of the image;
determining an image feature region in the image based on the detection result;
and coding the pixel points in the image characteristic region to obtain the coded data of the image.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to execute the data processing method described above.
In a fourth aspect, an embodiment of the present application further provides a terminal, including a processor and a memory, where the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is used to execute the data processing method.
In the implementation of the application, when the terminal receives a shooting instruction, a current image is collected according to the shooting instruction, and a specified audio file is played; the pixel points in the image are encoded to obtain encoded data of the image; and an association relation is established between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, with the associated data stored to a cloud. In this scheme, audio is played while the terminal takes a picture, the picture is stored within segmented audio, and the segmented audio carrying the stored picture is synchronously uploaded to the cloud; when the player outputs a segmented audio, the picture corresponding to that segmented audio is output synchronously. The picture therefore does not need to be stored on the terminal, which reduces occupation of the terminal's storage space.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of a data processing method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 4 is another schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Fig. 6 is another schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a data processing method, a data processing device, a storage medium and a terminal. The details will be described below separately.
In an embodiment, a data processing method is provided, and is applied to terminal equipment with a camera, such as a smart phone, a tablet computer, and a notebook computer. Referring to fig. 1, a specific flow of the data processing method may be as follows:
101. and receiving a shooting instruction.
Specifically, when a gesture operation of a user on a shutter key or other shooting shortcut keys of a native camera, a third-party camera application, and the like in the terminal is detected, a shooting instruction may be triggered.
The gesture operation may be a click operation, a pressing operation (e.g., a force press or a long press), a sliding operation, and the like, which is not limited herein.
In addition, the user's voice signal can be analyzed and recognized, and when the voice signal meets a preset condition, the terminal can be triggered to receive a shooting instruction.
102. And acquiring a current image according to the shooting instruction, and playing the specified audio file.
Specifically, after the shooting instruction is received, the terminal camera can be driven to collect and shoot images based on the shooting instruction. And when the camera is driven to shoot, the player is automatically started to play the specified audio file. The designated audio file may be a song, a melody, or other music file.
In practical application, the audio file can be stored locally in the terminal, when the camera is started to shoot the image, the specified audio file stored locally is called to be played, and the specified audio file can be paused to be played after the image shooting is finished. When the terminal is detected to continue shooting the images through the camera, the audio file can be continuously played from the previous pause node, and the specified audio file is paused again after the current image shooting is finished, and so on.
In some embodiments, the audio file can also be stored on a third-party server that provides the audio file when audio needs to be played, so that the terminal can download and play it; after the image shooting is finished, downloading of the audio data stops until image shooting is detected again.
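To make the pause-and-resume behavior above concrete, the following is a minimal sketch in Python. It assumes a hypothetical player object exposing load, seek, play, pause, and position methods; none of these names come from the patent.

```python
class CaptureAudioController:
    """Plays the designated audio file only while images are being captured,
    resuming from the previous pause node on each new capture."""

    def __init__(self, player, audio_file):
        self.player = player          # hypothetical audio player object
        self.audio_file = audio_file  # local path or download URL of the file
        self.pause_node = 0.0         # playback position (seconds) at last pause

    def on_capture_start(self):
        # Resume the designated audio file from the previous pause node.
        self.player.load(self.audio_file)
        self.player.seek(self.pause_node)
        self.player.play()

    def on_capture_end(self):
        # Remember where playback stopped so the next capture continues here.
        self.pause_node = self.player.position()
        self.player.pause()
```

The same controller works whether the file is stored locally or streamed from a third-party server; only the source passed to load changes.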
103. And coding the pixel points in the image to obtain the coded data of the image.
In this embodiment, it is necessary to convert the analog signal corresponding to the image into a digital signal recognizable by the computer. Therefore, the pixel points contained in the image can be coded to obtain the coded data of the image, so that the computer can identify the image information. In practical applications, in order to reduce the encoding workload and save terminal resources, only the effective content area in the image may be encoded. That is, in some embodiments, when encoding a pixel point in the image, the following process may be included:
(11) detecting the characteristic points of the image;
(12) determining an image feature region in the image based on the detection result;
(13) and coding the pixel points in the image characteristic region to obtain the coded data of the image.
Specifically, feature point detection is performed on the image first, and edge feature points can be detected on each entity in the image by using the geometric features of the entity. Then, the image is divided based on the detected edge feature points, so that an image feature region and a non-feature region are obtained through division. And finally, the terminal only needs to encode the pixel points in the image characteristic region to obtain the encoded data of the image.
In specific implementation, the pixel points can be encoded in an ordered sequence. For example, when multiple entities exist in the image, the pixel points in the region of each entity may be encoded in turn according to the entities' arrangement order; alternatively, a key region may be identified in the image and the pixel points encoded starting from the key region and spreading gradually outward.
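As a minimal sketch of steps (11) to (13), the following Python assumes a grayscale image held in a NumPy array. The patent names neither a feature detector nor a codec, so a simple gradient threshold stands in for feature-point detection and raw row-major byte packing stands in for the pixel encoding.

```python
import numpy as np

def encode_feature_region(image: np.ndarray, grad_thresh: float = 30.0) -> bytes:
    # (11) Detect feature points via local gradient magnitude (edge strength).
    gy, gx = np.gradient(image.astype(float))
    feature_mask = np.hypot(gx, gy) > grad_thresh

    # (12) Determine the image feature region: here, the bounding box
    # enclosing all detected edge feature points.
    ys, xs = np.nonzero(feature_mask)
    if len(ys) == 0:
        return b""  # no feature points detected: nothing to encode
    region = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # (13) Encode only the pixel points inside the feature region, in an
    # ordered (row-major) sequence.
    return region.astype(np.uint8).tobytes()
```

Because only the feature region is encoded, the non-feature area contributes no encoded data, which is the stated saving in encoding workload.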
104. And establishing an association relation between the coded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the audio file, and storing the associated data to a cloud.
Specifically, after the image is coded, the image and the data in the audio file can be associated and then stored in the cloud server together, so that the image does not occupy the storage space of the terminal. When the audio data associated with the encoded data in the audio file is played again, the corresponding encoded data can be obtained from the cloud server according to the association relationship and output, so as to restore the image. That is, referring to fig. 2, in some embodiments, after storing the associated data in the cloud, the following steps may be further included:
105. in the process of playing the specified audio file, judging whether audio data associated with the encoded data of the image is detected to be played or not; if so, go to step 106, otherwise continue to perform the detection.
106. And acquiring corresponding coded data from the cloud to output and display the coded data based on the playing progress and the association relation of the currently associated audio data.
Specifically, based on the currently playing audio data, the associated encoded data is acquired from the server in real time, downloaded to the terminal, and output and displayed to restore the image. In practical applications, in order to improve the restoration accuracy of each image, after the association relation is established, start-point and end-point marks may be placed based on the position of the associated audio data in the audio data stream of the specified audio file. That is, before the associated data is stored to the cloud, the audio data associated with the encoded data of the image may be marked with a start point and an end point.
Then, when acquiring the corresponding encoded data from the cloud for output and display based on the playing progress and the association relationship of the currently associated audio data, the following process may be specifically included:
when the audio data corresponding to the starting point mark is detected to be played, acquiring corresponding coded data from the cloud based on the playing progress and the association relation of the currently associated audio data;
and when the audio data corresponding to the end point mark is detected to be played completely, merging the coded data acquired from the start point mark to the end point mark, and displaying a corresponding picture based on the merged data.
Specifically, after all of an image's encoded data stored in the cloud has been obtained, from the start-point mark through the end-point mark, the obtained encoded data can be read and merged to restore the image, and the restored image is displayed on the current interface.
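The marker-driven flow can be sketched as follows, assuming a hypothetical cloud client exposing a fetch(chunk_id) call and an association map from audio positions to encoded-data chunk ids; these names are illustrative, not from the patent.

```python
def restore_image_between_marks(cloud, association, start_mark, end_mark):
    """Collect every encoded-data chunk associated with audio positions from
    the start mark through the end mark, then merge them into one stream."""
    chunks = []
    for position, chunk_id in sorted(association.items()):
        if start_mark <= position <= end_mark:
            chunks.append(cloud.fetch(chunk_id))  # download from cloud server
    # Merge the chunks acquired between the two marks; the corresponding
    # picture is then displayed based on the merged data.
    return b"".join(chunks)
```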
In some embodiments, when the encoded data of multiple images is being output, the multiple images may be displayed simultaneously on the current interface. Various display modes are possible: for example, a zoomed-image mode may be adopted and thumbnails of the multiple images displayed on the current interface, or the images may be displayed stacked.
In practical application, when the duration for which an image generated from output encoded data has been displayed exceeds a specified duration and the total data amount of the current images is greater than a certain threshold (from which it can be inferred that the image's encoded data is no longer needed), the corresponding image can be hidden and its encoded data deleted from the terminal. In specific implementation, the user can also manually adjust an image's display position on the current interface; the further forward the display position, the higher the corresponding importance level. When images are hidden, those positioned later may be preferentially selected for hiding and for deletion of their encoded data.
In this embodiment, the images displayed as output may also serve as a temporary album. Access rights can be granted to other application software, within which the images can be edited, forwarded, saved, and so on. After all operations are completed, the data of the displayed images can be cleared.
In some embodiments, the step "acquiring corresponding encoded data from the cloud for output and display based on the playing progress and the association relationship of the currently associated audio data" may include the following steps:
determining the current audio data which is being played according to the playing progress of the currently associated audio data;
acquiring encoded data corresponding to the current audio data in real time based on the association relation;
and sequentially displaying pixel points corresponding to the acquired encoded data on an audio playing interface.
Specifically, when the image is output and displayed based on the encoded data, the obtained encoded data may be decoded in real time into pixel points, and the decoded pixel points (together with those already decoded and displayed) are shown in real time until all the encoded data of the image has been decoded, thereby restoring the image.
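The progressive restore can be sketched as follows, assuming chunks arrive in playback order, pixels are 8-bit with a fixed row width, and show_pixels is a hypothetical display callback; all of these are illustrative assumptions.

```python
import numpy as np

def progressive_display(chunk_stream, width, show_pixels):
    decoded = b""
    for chunk in chunk_stream:            # encoded chunks in playback order
        decoded += chunk
        rows = len(decoded) // width      # render only complete pixel rows
        if rows:
            pixels = np.frombuffer(decoded[:rows * width], dtype=np.uint8)
            # Redraw previously decoded pixels plus the newly decoded ones,
            # so the image fills in as its associated audio plays.
            show_pixels(pixels.reshape(rows, width))
```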
In some embodiments, the step of "establishing an association relationship between encoded data of an image and corresponding audio data in a specified audio file according to a playing progress of the specified audio file" may include the following processes:
determining the shooting starting time and the shooting ending time of the image;
determining a first audio file node corresponding to the shooting start moment and a second audio file node corresponding to the shooting end moment;
acquiring target audio data between a first audio file node and a second audio file node;
and establishing an association relation between the encoded data of the image and the target audio data.
Specifically, when the association relation between the encoded data and the audio data is constructed, so that the user can anticipate in advance which image will be output when the audio file plays to a certain stage, the audio data played during the time the image was captured may be used as the audio data associated with the image. That is, the shooting start time and shooting end time of the image may be determined, the target audio data between the audio file's playback nodes corresponding to those two times acquired, and then the association relation established between the encoded data of the image and the target audio data.
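A minimal sketch of this time-to-node mapping follows, assuming uncompressed 16-bit mono PCM at a fixed sample rate; these audio parameters are illustrative assumptions, not from the patent.

```python
def associate_image_with_audio(encoded_image: bytes, audio_pcm: bytes,
                               t_start: float, t_end: float,
                               sample_rate: int = 44100) -> dict:
    bytes_per_sample = 2  # 16-bit mono PCM

    # First node: the audio position playing at the shooting start time;
    # second node: the position playing at the shooting end time.
    first_node = int(t_start * sample_rate) * bytes_per_sample
    second_node = int(t_end * sample_rate) * bytes_per_sample

    # The target audio data lies between the first and second nodes.
    target_audio = audio_pcm[first_node:second_node]

    # The association relation links the image's encoded data to that segment;
    # this record is what gets stored to the cloud.
    return {"encoded_image": encoded_image,
            "audio_span": (first_node, second_node),
            "target_audio": target_audio}
```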
As can be seen from the above, in the data processing method provided in this embodiment, when the terminal receives a shooting instruction, the current image is collected according to the shooting instruction, the specified audio file is played, and the pixel points in the image are encoded to obtain the encoded data of the image. An association relation is then established between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the audio file, and the associated data is stored to a cloud. In this scheme, audio is played while the terminal takes a picture, the picture is stored within segmented audio, and the segmented audio carrying the stored picture is synchronously uploaded to the cloud; when the player outputs a segmented audio, the picture corresponding to that segmented audio is output synchronously. The picture therefore does not need to be stored on the terminal, which reduces occupation of the terminal's storage space.
In another embodiment of the present application, another data processing method is also provided. The data processing method in the present embodiment will be described in detail below, taking the specified audio file as a music file as an example.
When the terminal shoots, music is played and the picture-audio virtual mixing conversion module is started; this module converts the picture shot by the terminal into picture virtual audio. Picture virtual audio is virtual dynamic data that simulates audio and encodes the features of the shot picture: the virtual dynamic data the module generates for each shot picture is different, and the module outputs the corresponding shot picture according to that distinct virtual dynamic data.
The picture virtual audio is stored within the audio of the played music and uploaded to the cloud server corresponding to the player for storage, so the shot picture occupies no terminal storage. The music played while the terminal shoots contains n segmented audios, each being the portion of music played during one shot, and the shot pictures are stored into the segmented audios through the picture-audio virtual mixing conversion module. The n segmented audios store the n shot pictures in one-to-one correspondence, so the n pictures occupy no terminal storage space; the n segmented audios carrying them are synchronously uploaded to the cloud server corresponding to the player, greatly reducing terminal storage occupation.
When the n segmented audios are output, the picture-audio virtual mixing conversion module synchronously outputs the n shot pictures corresponding to them, without affecting normal audio output.
The audio playing at the moment of shooting and the shot picture are processed synchronously by the picture-audio virtual mixing conversion module. The module takes the segmented audio A from shooting time t0 to t1, where t0 is the moment the camera module captures the subject information and t1 is the moment the shot completes. The segmented audio A and the picture a shot between t0 and t1 are processed by the module, which converts picture a into a picture virtual audio a' that simulates the signal of segmented audio A and is stored within segmented audio A; a' consists of k groups of virtual dynamic data.
The module takes k groups of audio data Σ[ΣD_a(n1), ΣD_a(n2), ..., ΣD_a(n)](k) from segmented audio A, sampling the audio data at intervals t(a) < 5 µs, and synchronously takes k groups of feature data Σ[ΣD_p(n1), ΣD_p(n2), ..., ΣD_p(n)](k) from the shot picture a at intervals t(p) < 5 µs, with t(a) = t(p).
The module then synchronously converts the feature data Σ[ΣD_p(n1), ΣD_p(n2), ..., ΣD_p(n)](k) into the picture virtual audio a' = Σ[ΣD_pa(n1), ΣD_pa(n2), ..., ΣD_pa(n)](k), i.e. k groups of virtual dynamic data, which are stored in segmented audio A.
Similarly, following the steps above, the picture-audio virtual mixing conversion module takes the audio data of the n segmented audios and the feature data of the n shot pictures, converts the feature data into picture virtual audio according to the audio data, and uploads the n segmented audios storing the virtual dynamic data to the cloud server corresponding to the player, so the shot pictures occupy no terminal storage space, greatly reducing terminal storage occupation. When the player outputs a segmented audio, the shot-picture features stored within it, i.e. the virtual dynamic data, are converted back into the corresponding shot picture by the module, so output of the segmented audio is not disturbed and user experience is greatly enhanced.
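The patent leaves the internals of the picture-audio virtual mixing conversion module opaque, so the following is only a bookkeeping sketch of its one-to-one contract: segmented audio i carries the virtual dynamic data of picture i, and outputting a segment yields both the unchanged audio and the restored picture. The to_virtual/from_virtual pair is an assumed reversible conversion, not the patent's method.

```python
def to_virtual(picture_features: bytes) -> bytes:
    return picture_features   # placeholder for the feature-to-virtual-audio step

def from_virtual(virtual_data: bytes) -> bytes:
    return virtual_data       # placeholder for the inverse conversion

class SegmentStore:
    """One record per segmented audio: (audio bytes, virtual dynamic data)."""

    def __init__(self):
        self.segments = {}

    def store(self, seg_id: int, audio: bytes, picture_features: bytes):
        # One-to-one: segmented audio i stores shot picture i as virtual data.
        self.segments[seg_id] = (audio, to_virtual(picture_features))

    def output(self, seg_id: int):
        # Outputting a segment returns the audio unchanged plus its picture,
        # so normal audio output is not disturbed.
        audio, virtual = self.segments[seg_id]
        return audio, from_virtual(virtual)
```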
In another embodiment of the present application, a data processing apparatus is further provided, where the data processing apparatus may be integrated in a terminal in a form of software or hardware, and the terminal may specifically include a mobile phone, a tablet computer, a notebook computer, and the like. As shown in fig. 3, the data processing apparatus 300 may include: a receiving unit 301, a playing unit 302, an encoding unit 303 and a processing unit 304, wherein:
a receiving unit 301 configured to receive a shooting instruction;
the playing unit 302 is configured to acquire a current image according to the shooting instruction and play a specified audio file;
the encoding unit 303 is configured to encode a pixel point in the image to obtain encoded data of the image;
the processing unit 304 is configured to establish an association relationship between the encoded data of the image and corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and store the associated data in a cloud.
Referring to fig. 4, in some embodiments, the data processing apparatus 300 may further include:
a determination unit 305 configured to determine whether or not playing of audio data associated with encoded data of the image is detected in playing of the specified audio file;
and an output unit 306, configured to, if the determination unit's result is affirmative, acquire corresponding encoded data from the cloud for output and display based on the playing progress of the currently associated audio data and the association relation.
With continued reference to fig. 4, in some embodiments, the data processing apparatus 300 may further include:
a marking unit 307, configured to mark a start point and an end point of audio data associated with the encoded data of the image before storing the associated data in a cloud;
the processing unit 304 is specifically configured to:
when the audio data corresponding to the starting point mark is detected to be played, acquiring corresponding coded data from the cloud based on the playing progress of the currently associated audio data and the association relation;
and when the audio data corresponding to the end point mark is detected to be played completely, merging the encoded data acquired from the starting point mark to the end point mark, and displaying a corresponding picture based on the merged data.
In some embodiments, the processing unit 304 may be specifically configured to:
determining the current audio data which is being played according to the playing progress of the current associated audio data;
acquiring encoded data corresponding to the current audio data in real time based on the association relation;
and sequentially displaying pixel points corresponding to the acquired encoded data on an audio playing interface.
In some embodiments, the processing unit 304 may be specifically configured to:
determining the shooting starting time and the shooting ending time of the image;
determining a first audio file node corresponding to the shooting start time and a second audio file node corresponding to the shooting end time;
acquiring target audio data between a first audio file node and a second audio file node;
and establishing an association relation between the coded data of the image and the target audio data.
In some embodiments, the encoding unit 303 may specifically be configured to:
detecting the characteristic points of the image;
determining an image feature region in the image based on the detection result;
and coding the pixel points in the image characteristic region to obtain the coded data of the image.
As can be seen from the above, with the data processing device provided in the embodiment of the present application, when the terminal receives a shooting instruction, a current image is acquired according to the shooting instruction and a specified audio file is played; the pixel points in the image are encoded to obtain encoded data of the image; and an association relation is established between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, with the associated data stored to a cloud. In this scheme, audio is played while the terminal takes a picture, the picture is stored within segmented audio, and the segmented audio carrying the stored picture is synchronously uploaded to the cloud; when the player outputs a segmented audio, the picture corresponding to that segmented audio is output synchronously. The picture therefore does not need to be stored on the terminal, which reduces occupation of the terminal's storage space.
In another embodiment of the present application, a terminal is further provided, where the terminal may be a terminal device such as a smart phone and a tablet computer. As shown in fig. 5, the terminal 400 includes a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the terminal 400, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or loading an application stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal.
In this embodiment, the processor 401 in the terminal 400 loads instructions corresponding to one or more application processes into the memory 402 according to the following steps, and the processor 401 runs the application stored in the memory 402, thereby implementing various functions:
receiving a shooting instruction;
acquiring a current image according to the shooting instruction, and playing a specified audio file;
encoding the pixel points in the image to obtain encoded data of the image;
and establishing an association relation between the coded data of the image and corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud.
In some embodiments, after storing the associated data in the cloud, the processor 401 further performs the following steps:
in the process of playing the specified audio file, judging whether audio data associated with the coded data of the image is detected to be played or not;
and if so, acquiring corresponding coded data from the cloud end to output and display the coded data based on the playing progress of the currently associated audio data and the association relation.
In some embodiments, before storing the associated data in the cloud, the processor 401 further performs the following steps:
marking the starting point and the ending point of the audio data related to the encoded data of the image;
the obtaining, based on the playing progress of the currently associated audio data and the association relationship, corresponding encoded data from the cloud for output and display includes:
when the audio data corresponding to the starting point mark is detected to be played, acquiring corresponding coded data from the cloud based on the playing progress of the currently associated audio data and the association relation;
and when the audio data corresponding to the end point mark is detected to be played completely, merging the encoded data acquired from the start point mark to the end point mark, and displaying a corresponding picture based on the merged data.
In some embodiments, when the corresponding encoded data is obtained from the cloud for output and display based on the playing progress of the currently associated audio data and the association relationship, the processor 401 further performs the following steps:
determining the current audio data which is being played according to the playing progress of the current associated audio data;
acquiring encoded data corresponding to the current audio data in real time based on the association relation;
and sequentially displaying pixel points corresponding to the acquired encoded data on an audio playing interface.
In some embodiments, when the association relationship between the encoded data of the image and the corresponding audio data in the specified audio file is established according to the playing progress of the specified audio file, the processor 401 further performs the following steps:
determining the shooting starting time and the shooting ending time of the image;
determining a first audio file node corresponding to the shooting start time and a second audio file node corresponding to the shooting end time;
acquiring target audio data between a first audio file node and a second audio file node;
and establishing an association relation between the coded data of the image and the target audio data.
In some embodiments, when encoding the pixel points in the image to obtain the encoded data of the image, the processor 401 specifically executes the following steps:
detecting the characteristic points of the image;
determining an image feature region in the image based on the detection result;
and coding the pixel points in the image characteristic region to obtain the coded data of the image.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing instructions executable in the processor. Applications may constitute various functional modules. The processor 401 executes various functional applications and data processing by running applications stored in the memory 402.
In some embodiments, as shown in fig. 6, the terminal 400 further includes: a display 403, a control circuit 404, a radio frequency circuit 405, an input unit 406, a camera 407, a sensor 408, and a power supply 409. The processor 401 is electrically connected to the display 403, the control circuit 404, the radio frequency circuit 405, the input unit 406, the camera 407, the sensor 408, and the power supply 409.
The display screen 403 may be used to display information input by or provided to the user as well as various graphical user interfaces of the terminal, which may be constituted by images, text, icons, video, and any combination thereof.
The control circuit 404 is electrically connected to the display 403, and is configured to control the display 403 to display information.
The radio frequency circuit 405 is used to transmit and receive radio frequency signals so as to establish wireless communication with other devices, and to exchange signals with a server or other terminals.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The camera 407 may be used to collect image information. The camera may be a single camera with one lens, or may have two or more lenses.
The sensor 408 is used to collect external environmental information. The sensors 408 may include ambient light sensors, acceleration sensors, light sensors, motion sensors, and other sensors.
The power supply 409 is used to power the various components of the terminal 400. In some embodiments, the power source 409 may be logically connected to the processor 401 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
Although not shown in fig. 6, the terminal 400 may further include a speaker, a bluetooth module, and the like, which will not be described in detail herein.
As can be seen from the above, when the terminal provided in the embodiment of the present application receives a shooting instruction, it acquires a current image according to the shooting instruction and plays a specified audio file; it encodes the pixel points in the image to obtain encoded data of the image; and it establishes an association relation between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, storing the associated data to a cloud. In this scheme, audio is played while the terminal takes a picture, the picture is stored within segmented audio, and the segmented audio carrying the stored picture is synchronously uploaded to the cloud; when the player outputs a segmented audio, the picture corresponding to that segmented audio is output synchronously. The picture therefore does not need to be stored on the terminal, which reduces occupation of the terminal's storage space.
In some embodiments, there is also provided a computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the data processing methods described above.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The data processing method, apparatus, storage medium, and terminal provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A data processing method, comprising:
receiving a shooting instruction;
acquiring a current image according to the shooting instruction, and playing a specified audio file;
encoding the pixel points in the image to obtain encoded data of the image;
and establishing an association relation between the coded data of the image and corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud.
2. The data processing method of claim 1, further comprising, after storing the associated data in a cloud:
in the process of playing the specified audio file, judging whether audio data associated with the coded data of the image is detected to be played or not;
and if so, acquiring corresponding coded data from the cloud end to output and display the coded data based on the playing progress of the currently associated audio data and the association relation.
3. The data processing method of claim 2, further comprising, before storing the associated data in the cloud:
marking the starting point and the ending point of the audio data related to the encoded data of the image;
the obtaining, based on the playing progress of the currently associated audio data and the association relationship, corresponding encoded data from the cloud for output and display includes:
when the audio data corresponding to the starting point mark is detected to be played, acquiring corresponding coded data from the cloud based on the playing progress of the currently associated audio data and the association relation;
and when the audio data corresponding to the end point mark is detected to be played completely, merging the encoded data acquired from the starting point mark to the end point mark, and displaying a corresponding picture based on the merged data.
4. The data processing method according to claim 2, wherein the acquiring, from the cloud, corresponding encoded data for output and display based on the playing progress of the currently associated audio data and the association relationship comprises:
determining the current audio data which is being played according to the playing progress of the current associated audio data;
acquiring encoded data corresponding to the current audio data in real time based on the association relation;
and sequentially displaying pixel points corresponding to the acquired encoded data on an audio playing interface.
5. The data processing method of claim 1, wherein the establishing of the association relationship between the encoded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file comprises:
determining the shooting starting time and the shooting ending time of the image;
determining a first audio file node corresponding to the shooting start time and a second audio file node corresponding to the shooting end time;
acquiring target audio data between a first audio file node and a second audio file node;
and establishing an association relation between the coded data of the image and the target audio data.
6. The data processing method according to any one of claims 1 to 5, wherein the encoding of the pixel points in the image to obtain the encoded data of the image comprises:
detecting the characteristic points of the image;
determining an image feature region in the image based on the detection result;
and coding the pixel points in the image characteristic region to obtain the coded data of the image.
7. A data processing apparatus, comprising:
a receiving unit configured to receive a shooting instruction;
the playing unit is used for acquiring a current image according to the shooting instruction and playing a specified audio file;
the encoding unit is used for encoding the pixel points in the image to obtain encoded data of the image;
and the processing unit is used for establishing an association relation between the coded data of the image and the corresponding audio data in the specified audio file according to the playing progress of the specified audio file, and storing the associated data to a cloud.
8. The data processing apparatus of claim 7, further comprising:
a judging unit configured to judge whether or not playing of audio data associated with the encoded data of the image is detected in a process of playing the specified audio file;
and the output unit is used for acquiring corresponding coded data from the cloud end to output and display the coded data based on the playing progress of the currently associated audio data and the association relation if the judgment unit judges that the audio data is positive.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the data processing method of any of claims 1-6.
10. A terminal is characterized by comprising a processor and a memory, wherein the processor is electrically connected with the memory, and the memory is used for storing instructions and data; the processor is configured to perform the data processing method of any one of claims 1-6.
Application CN202010059477.2A, filed 2020-01-19 (priority date 2020-01-19): Data processing method, device, storage medium and terminal. Publication CN111263058A (en), legal status: Pending.

Priority Applications (1)

Application number: CN202010059477.2A; priority date: 2020-01-19; filing date: 2020-01-19; title: Data processing method, device, storage medium and terminal

Publications (1)

Publication number: CN111263058A; publication date: 2020-06-09

Family

ID: 70950864

Family Applications (1)

Application number: CN202010059477.2A; priority date: 2020-01-19; filing date: 2020-01-19; title: Data processing method, device, storage medium and terminal

Country Status (1)

Country: CN; publication: CN111263058A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party

    • CN101175208A * (三洋电机株式会社), priority date 2004-10-29, published 2008-05-07: Image coding method and apparatus, and image decoding method and apparatus
    • CN105045824A * (成都亿邻通科技有限公司), priority date 2015-06-29, published 2015-11-11: Method for downloading target file
    • CN106033421A * (中兴通讯股份有限公司), priority date 2015-03-10, published 2016-10-19: A file output method and a terminal
    • CN107426092A * (四川长虹电器股份有限公司), priority date 2017-08-23, published 2017-12-01: An implementation method of sound photos based on WeChat
    • US10382663B1 * (Looksytv, Inc.), priority date 2013-09-25, published 2019-08-13: Remote video system


Legal Events

    • PB01: Publication
    • SE01: Entry into force of request for substantive examination