CN112073595A - Image processing method, device, storage medium and mobile terminal - Google Patents


Info

Publication number
CN112073595A
Authority
CN
China
Prior art keywords
image
sequence
target
target image
images
Prior art date
Legal status
Pending
Application number
CN202010945112.XA
Other languages
Chinese (zh)
Inventor
俞斌
Current Assignee
TCL Communication Ningbo Ltd
Original Assignee
TCL Communication Ningbo Ltd
Priority date
Filing date
Publication date
Application filed by TCL Communication Ningbo Ltd filed Critical TCL Communication Ningbo Ltd
Priority to CN202010945112.XA
Publication of CN112073595A
Legal status: Pending

Classifications

    • H04N 1/41: Bandwidth or redundancy reduction (H04N 1/00, Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission)
    • G06F 18/22: Matching criteria, e.g. proximity measures (G06F 18/20, Analysing; G06F 18/00, Pattern recognition)
    • G06N 3/045: Combinations of networks (G06N 3/04, Architecture, e.g. interconnection topology; G06N 3/02, Neural networks)
    • G06N 3/08: Learning methods (G06N 3/02, Neural networks)
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames (G06V 20/40, Scenes; Scene-specific elements in video content)
    • G06V 20/48: Matching video sequences (G06V 20/40, Scenes; Scene-specific elements in video content)
    • H04N 19/85: Coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression (H04N 19/00)

Abstract

The application discloses an image processing method, an image processing device, a storage medium and a mobile terminal. The method includes: acquiring an initial image sequence to be sent; determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity; and sending a target image sequence, wherein the target image sequence at least comprises the images in the initial image sequence other than the target image. By screening out the target image sequence with the preset similarity, the method reduces the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.

Description

Image processing method, device, storage medium and mobile terminal
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and a mobile terminal.
Background
In recent years, mobile terminals such as mobile phones, tablet phones, and personal digital assistants have become popular with users for their portability. A user can transmit images to other terminals through the Near Field Communication (NFC) or wireless fidelity (Wi-Fi) module on the terminal.
In the related art, sending a plurality of images, or a video composed of a plurality of frames of images, takes a long time and wastes a large amount of transmission resources.
Therefore, the prior art has defects and needs to be improved urgently.
Disclosure of Invention
The embodiments of the application provide an image processing method that can reduce the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.
The embodiment of the application provides an image processing method, which is applied to a sending terminal and comprises the following steps:
acquiring an initial image sequence to be sent;
determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity;
and sending a target image sequence, wherein the target image sequence at least comprises the images in the initial image sequence other than the target image.
The embodiment of the present application further provides an image processing method, applied to a receiving terminal, including:
receiving a target image sequence, and identifying identification information from the target image sequence;
determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image;
performing fusion processing on the pixel data of the first image and the pixel data of the second image to obtain a target image;
and constructing a new image sequence according to the target image and the target image sequence.
An embodiment of the present application further provides an image processing apparatus, applied to a sending terminal, including:
the acquisition module is used for acquiring an initial image sequence to be transmitted;
the first determining module is used for determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity;
and the sending module is used for sending a target image sequence, and the target image sequence at least comprises other images except the target image in the initial image sequence.
An embodiment of the present application further provides an image processing apparatus applied to a receiving terminal, including:
the receiving module is used for receiving a target image sequence and identifying identification information from the target image sequence;
the second determining module is used for determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image;
the fusion module is used for carrying out fusion processing on the pixel data of the first image and the pixel data of the second image to obtain a target image;
and the construction module is used for constructing a new image sequence according to the target image and the target image sequence.
An embodiment of the present application further provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the image processing method as described above.
The embodiment of the present application further provides a mobile terminal, where the mobile terminal includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the image processing method described above by calling the computer program stored in the memory.
The image processing method provided by the embodiments of the application includes: acquiring an initial image sequence to be sent; determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity; and sending a target image sequence, wherein the target image sequence at least comprises the images in the initial image sequence other than the target image. By screening out the target image sequence with the preset similarity, the method reduces the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a scene schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 1b is a schematic flowchart of an image processing method applied to a sending terminal according to an embodiment of the present application.
Fig. 1c is a schematic flowchart of an image processing method applied to a receiving terminal according to an embodiment of the present application.
Fig. 2a is a schematic structural diagram of an image processing apparatus applied to a sending terminal according to an embodiment of the present application.
Fig. 2b is a schematic structural diagram of an image processing apparatus applied to a receiving terminal according to an embodiment of the present application.
Fig. 3 is a specific structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide an image processing method, an image processing apparatus, a storage medium and a mobile terminal.
Referring to fig. 1a, fig. 1a is a schematic view of a scenario of an image processing system according to an embodiment of the present application, which includes a sending terminal A and a receiving terminal B connected through a communication network. The communication network may include a wireless network, a wired network, or a Near Field Communication (NFC) network, where the wireless network includes one or a combination of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes network entities such as routers and gateways, which are not shown in the figure. The sending terminal A and the receiving terminal B may be terminal devices such as a mobile phone, a computer, or a personal digital assistant. The sending terminal A interacts with the receiving terminal B through the communication network, for example by sending a plurality of images or a video composed of a plurality of frames of images to the receiving terminal B.
The image processing system may include an image processing apparatus, which may be integrated into the sending terminal A and the receiving terminal B. As shown in fig. 1a, the sending terminal A obtains an initial image sequence to be sent, determines target images by comparing the image similarity between each image in the initial image sequence and its previous and next images against the preset similarity, and transmits a target image sequence that does not include the target images. When the receiving terminal B receives the target image sequence, it determines an image group comprising a first image and a second image according to the identification information, and fuses the first image and the second image to obtain the target image. In this way, screening out the target image sequence with the preset similarity reduces the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.
It should be noted that the scene schematic diagram of the image processing method shown in fig. 1a is only an example. The image processing system and the scene described in the embodiments of the present application are intended to explain the technical solution more clearly and do not limit it; as those of ordinary skill in the art know, the technical solution provided in the embodiments of the present application is equally applicable to similar technical problems as the system evolves and new service scenarios appear.
Referring to fig. 1b, fig. 1b is a schematic flowchart illustrating an image processing method applied to a sending terminal according to an embodiment of the present disclosure. The image processing method comprises the following steps:
step 101, obtaining an initial image sequence to be sent.
The initial image sequence is a plurality of continuous images or the plurality of frames of images that form a video; the continuous images may be selected by a user in preset software (such as a photo album application).
Step 102, determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity.
The images in the image sequence have a certain order, which may be the order in which the user selected them in the preset software or the playing order of the video. According to this order, a target image whose image similarity to both the adjacent previous image and the adjacent next image is greater than the preset similarity is determined. It can be understood that a target image is an image that differs little from its adjacent previous and next images.
In some embodiments, the step of determining a target image whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity includes:
(1) determining a characteristic region of each image, and determining, as a target image, an image whose characteristic region has an image similarity greater than the preset similarity to the characteristic regions of both the adjacent previous image and the adjacent next image in the image sequence.
That is, features are extracted from the initial image sequence with a preset feature extraction model, and the target image is determined by comparing the image similarity between the characteristic region of each image and the characteristic regions of its adjacent previous and next images.
Specifically, the feature extraction model is obtained by training a Deep Neural Network (DNN) to extract features. The training process of the DNN includes: preparing a plurality of images in advance and labeling each image; feeding the images to the DNN, which applies convolution, pooling and other processing to each image to obtain an analysis result; and adjusting, according to the analysis result, the convolution kernels used by the convolutional layers to extract image features, until kernels that extract features more completely are obtained, thereby producing the trained feature extraction model. Thus, the step of constructing the feature extraction model may include:
(1.1) obtaining sample images and the pre-training results corresponding to the sample images, and constructing a feature extraction model;
(1.2) training the feature extraction model with the sample images and the corresponding pre-training results to obtain the trained feature extraction model.
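To make the screening concrete, here is a minimal sketch (an illustration only, not code from the application; the cosine-similarity measure, the 0.98 default threshold and all names are assumptions) that flags as target images the frames whose feature similarity to both neighbours exceeds the preset similarity:

import numpy as np

def find_target_indices(features, preset_similarity=0.98):
    """features: one feature vector per image, in sequence order (for example,
    produced by the trained feature extraction model). Returns the indices of
    the target images, i.e. frames whose similarity to BOTH the adjacent
    previous image and the adjacent next image exceeds the preset similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    targets = []
    # The first and last images have only one neighbour, so they are always kept.
    for i in range(1, len(features) - 1):
        if (cosine(features[i], features[i - 1]) > preset_similarity
                and cosine(features[i], features[i + 1]) > preset_similarity):
            targets.append(i)
    return targets

Any other similarity measure over the characteristic regions could replace the cosine similarity here without changing the screening logic.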
Step 103, sending a target image sequence, wherein the target image sequence at least comprises the images in the initial image sequence other than the target image.
After the target image is determined, the images in the initial image sequence other than the target image can be sent, which avoids the long transmission time and the large amount of transmission resources otherwise needed to send a plurality of images or a video composed of a plurality of frames of images.
In some embodiments, the step of sending the target image sequence includes:
(1.1) determining a target sequence number of the target image in the initial image sequence;
(1.2) identifying the target image sequence according to the target sequence number, and sending the identified target image sequence.
The sent target image sequence is identified so that the receiving terminal knows that an unsent target image exists in the received target image sequence and can simulate the target image from the images in the sequence according to the identification.
In some embodiments, the step of identifying the target image sequence according to the target sequence number and sending the identified target image sequence includes:
(1.2.1) sequentially sending a plurality of synchronization frames according to the number of images in the initial image sequence;
(1.2.2) if the sequence number of the synchronization frame currently to be sent is detected to be different from the target sequence number, acquiring, from the initial image sequence, the image whose sequence number is the same as that of the synchronization frame to be sent, so that the synchronization frame carries that image.
When the images are transmitted, the image information of each image is carried by a synchronization frame, so a plurality of synchronization frames can be sent in order according to the number of images in the initial image sequence. If the sequence number of the synchronization frame currently to be sent differs from the target sequence number, the image with the same sequence number is obtained from the initial image sequence and carried by that synchronization frame; if the sequence number of the synchronization frame to be sent equals the target sequence number, a synchronization frame carrying no image information is sent directly. That is, synchronization frames are divided into frames that carry an image and frames that do not. Based on this, the receiving terminal can simulate the target image according to whether each synchronization frame carries an image.
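A sender-side sketch of this synchronization-frame scheme might look as follows (hypothetical names; send stands in for whatever transport the terminal actually uses, such as Wi-Fi or NFC):

def send_sync_frames(initial_images, target_indices, send):
    """Send one synchronization frame per image of the initial sequence, in order.
    Frames whose sequence number is a target sequence number carry no image, so
    the receiving terminal can detect where a target image was omitted."""
    target_set = set(target_indices)
    for seq_no, image in enumerate(initial_images):
        if seq_no in target_set:
            send({"seq": seq_no, "image": None})   # synchronization frame without an image
        else:
            send({"seq": seq_no, "image": image})  # synchronization frame carrying the image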
In some embodiments, the step of identifying the target image sequence according to the target sequence number and sending the identified target image sequence includes:
(1.2.3) updating the sequence number of each image in the target image sequence according to the target sequence number and the sequence number of each image in the initial image sequence;
(1.2.4) sending the target image sequence with the updated sequence number.
The sequence number of each image in the target image sequence can be updated according to the target sequence number of the target image and the sequence number of each image in the initial image sequence. For example, suppose the initial image sequence contains the images numbered 1, 2 and 3 and the target image is the image numbered 2. After the target image is removed, the remaining image that would otherwise be renumbered 2 (number 3 in the initial sequence) is updated to sequence number 3, so the target image sequence is sent with sequence numbers 1 and 3. The receiving terminal can therefore infer the target image from whether the sequence numbers are consecutive.
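Under this reading of the example (an assumption of the sketch below, which uses invented names), renumbering amounts to keeping each remaining image's original sequence number so that gaps in the numbering mark the removed target images:

def renumber_target_sequence(initial_images, target_sequence_numbers):
    """Drop the target images and pair each remaining image with its original
    sequence number, counted from 1. For images 1, 2 and 3 with target image 2,
    the result carries sequence numbers 1 and 3."""
    targets = set(target_sequence_numbers)
    return [(seq_no, image)
            for seq_no, image in enumerate(initial_images, start=1)
            if seq_no not in targets]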
The image processing method provided by the embodiments of the application includes: acquiring an initial image sequence to be sent; determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity; and sending a target image sequence, wherein the target image sequence at least comprises the images in the initial image sequence other than the target image. By screening out the target image sequence with the preset similarity, the method reduces the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.
In some embodiments, please refer to fig. 1c, where fig. 1c is a schematic flowchart of an image processing method applied to a receiving terminal according to an embodiment of the present disclosure. The image processing method comprises the following steps:
step 201, receiving a target image sequence, and identifying identification information from the target image sequence.
After the target image sequence is identified based on the sending terminal, the identification information can be identified from the received target image sequence.
Step 202, determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image.
In order to simulate the target image, a group of images adjacent to the target image in the initial image sequence needs to be determined according to the identification information, namely the first image that precedes the target image and the second image that follows it in the initial image sequence.
In some embodiments, each image in the target image sequence is carried by a synchronization frame, and the step of determining at least one image group from the target image sequence according to the identification information includes:
(1) if a first synchronization frame carrying an image is detected and the synchronization frame after it in receiving order does not carry an image, determining the image carried by the first synchronization frame as the first image;
(2) if a second synchronization frame carrying an image is detected and the synchronization frame before it in receiving order does not carry an image, determining the image carried by the second synchronization frame as the second image;
(3) determining the first image and the second image as an image group.
Because the synchronization frames sent by the sending terminal are divided into frames that carry an image and frames that do not, and are sent in the order of the initial image sequence, the receiving terminal proceeds as follows: if a first synchronization frame carrying an image is detected and the synchronization frame after it in receiving order does not carry an image, the image carried by the first synchronization frame is determined as the first image; if a second synchronization frame carrying an image is detected and the synchronization frame before it in receiving order does not carry an image, the image carried by the second synchronization frame is determined as the second image.
In some embodiments, each image in the target image sequence is marked with a sequence number, and the step of determining the first image and the second image from the target image sequence according to the identification information includes:
(1) if the sequence numbers of two adjacently received images are not consecutive, determining the two adjacent images as an image group.
For example, if the images in the target image sequence are marked with sequence numbers 1 and 3, the numbers 1 and 3 are not consecutive, so a target image must exist between them; the images numbered 1 and 3 can therefore be determined as an image group.
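On the receiving side, this gap detection can be sketched as follows (illustrative only; received is assumed to be a list of (sequence number, image) pairs in receiving order):

def find_image_groups(received):
    """Whenever two adjacently received sequence numbers are not consecutive, the
    image before the gap is the first image, the image after it is the second
    image, and the gap size is the number of target images to simulate."""
    groups = []
    for (seq_a, img_a), (seq_b, img_b) in zip(received, received[1:]):
        if seq_b - seq_a > 1:
            groups.append({
                "first_image": img_a,
                "second_image": img_b,
                "missing_count": seq_b - seq_a - 1,  # N in the fusion formula of step 203
            })
    return groups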
Step 203, fusing the first image and the second image to obtain a target image.
Because the first image and the second image are both highly similar to the target image, the target image can be simulated by fusing the first image and the second image.
In some embodiments, the first image comprises a first pixel point, and the step of fusing the first image and the second image includes:
(1) acquiring the pixel data of the second pixel point in the second image that corresponds to the first pixel point;
(2) fusing the pixel data of the first pixel point with the pixel data of the second pixel point.
That is, the pixel data (for example, the pixel color data) of a first pixel point in the first image is obtained, the pixel data of the corresponding second pixel point in the second image is obtained, and the pixel data of the first pixel point and the pixel data of the second pixel point are fused.
For example, divide each frame of image into x × y pixel points, where x and y are positive integers. For the first image, denote the pixel data at row i, column j as RGB(i, j); for the second image, denote it as RGB'(i, j). If the number of target images is N, the pixel data at row i, column j of the M-th target image is RGB(i, j) + M × (RGB'(i, j) - RGB(i, j)) / (N + 1). Therefore, when the target images are consecutive, the pixel color data of every pixel point in each target image can be determined with this formula and the target images can be simulated.
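As a sketch of this formula (assuming H x W x 3 uint8 images and NumPy, neither of which is specified by the application), the N missing target images can be generated by linear interpolation of the pixel data:

import numpy as np

def simulate_target_images(first_image, second_image, n_missing):
    """Apply RGB_M(i, j) = RGB(i, j) + M * (RGB'(i, j) - RGB(i, j)) / (N + 1)
    to every pixel for M = 1 .. N, where RGB is the first image and RGB' is the
    second image, to simulate the N target images between them."""
    rgb = first_image.astype(np.float32)
    rgb_next = second_image.astype(np.float32)
    step = (rgb_next - rgb) / (n_missing + 1)
    return [np.clip(rgb + m * step, 0, 255).astype(np.uint8)
            for m in range(1, n_missing + 1)]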
Step 204, constructing a new image sequence according to the target image and the target image sequence.
The position of each simulated target image in the target image sequence can be determined from the synchronization frames or the sequence numbers in the identification information, and a new image sequence is constructed accordingly.
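Putting the pieces together, a new image sequence can be rebuilt by walking the received (sequence number, image) pairs and filling every numbering gap with simulated images (again a sketch with assumed names; simulate is the fusion routine sketched above):

def build_new_sequence(received, simulate):
    """received: non-empty list of (sequence number, image) pairs in order.
    Wherever the numbering jumps, insert the simulated target images between
    the two images that bracket the gap."""
    new_sequence = [received[0][1]]
    for (seq_a, img_a), (seq_b, img_b) in zip(received, received[1:]):
        missing = seq_b - seq_a - 1
        if missing > 0:
            new_sequence.extend(simulate(img_a, img_b, missing))
        new_sequence.append(img_b)
    return new_sequence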
The image processing method provided by the embodiments of the application includes: receiving a target image sequence, and identifying identification information from the target image sequence; determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image; fusing the first image and the second image to obtain a target image; and constructing a new image sequence according to the target image and the target image sequence. By simulating, from the first image and the second image, the target image that the sending terminal did not send, the method reduces the transmission time and transmission resources needed when sending a plurality of images or a video composed of a plurality of frames of images.
Referring to fig. 2a, fig. 2a is a schematic structural diagram of an image processing apparatus applied to a sending terminal according to an embodiment of the present disclosure. The image processing apparatus includes: an acquisition module 31, a first determination module 32 and a sending module 33.
The obtaining module 31 is configured to obtain an initial image sequence to be sent.
The first determining module 32 is configured to determine, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity.
A sending module 33, configured to send a target image sequence, where the target image sequence at least includes other images in the initial image sequence except for the target image.
Referring to fig. 2b, fig. 2b is a schematic structural diagram of an image processing apparatus applied to a receiving terminal according to an embodiment of the present disclosure. The image processing apparatus includes: a receiving module 41, a second determining module 42, a fusing module 43 and a building module 44.
A receiving module 41, configured to receive a target image sequence, and identify identification information from the target image sequence;
a second determining module 42, configured to determine at least one image group from the target image sequence according to the identification information, where the image group includes a first image and a second image;
a fusion module 43, configured to perform fusion processing on the pixel data of the first image and the pixel data of the second image to obtain a target image;
and a construction module 44 for constructing a new image sequence according to the target image and the target image sequence.
Based on the above method, the present invention also provides a storage medium having a plurality of instructions stored thereon, wherein the instructions are adapted to be loaded by a processor and to perform the image processing method as described above.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Fig. 3 is a block diagram showing a specific structure of a terminal according to an embodiment of the present invention, which can be used to implement the image processing method, the storage medium, and the terminal provided in the above embodiments.
As shown in fig. 3, the mobile terminal 1200 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media (only one shown), an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a transmission module 170, a processor 180 including one or more processing cores (only one shown), and a power supply 190. Those skilled in the art will appreciate that the configuration illustrated in fig. 3 does not limit the mobile terminal 1200, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components. Wherein:
the RF circuitry 110 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 110 may communicate with various networks such as the internet, an intranet, a wireless network, or with a second device over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network.
The memory 120 may be used to store software programs and modules, such as the program instructions/modules corresponding to the image processing method, apparatus, storage medium and mobile terminal in the above embodiments. The processor 180 executes various functional applications and data processing, i.e., implements the image processing method described above, by running the software programs and modules stored in the memory 120. The memory 120 may include a high-speed random access memory and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 120 may be the storage medium described above.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller.
The display unit 140 may be used to display information input by or provided to the user and various graphic user interfaces of the mobile terminal 1200, which may be configured by graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, and further, the touch-sensitive surface 131 may cover the display panel 141. The display interface of the mobile terminal in the above embodiment may be represented by the display unit 140, that is, the display content for displaying the shot may be displayed by the display unit 140.
The mobile terminal 1200 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 141 and/or the backlight when the mobile terminal 1200 is moved to the ear. Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the mobile terminal 1200 and are not described in detail here.
Audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between a user and mobile terminal 1200. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161; on the other hand, the microphone 162 converts the collected sound signal into an electric signal, converts the electric signal into audio data after being received by the audio circuit 160, and then outputs the audio data to the processor 180 for processing, and then to the RF circuit 110 to be transmitted to, for example, another terminal, or outputs the audio data to the memory 120 for further processing. The audio circuitry 160 may also include an earbud jack to provide communication of peripheral headphones with the mobile terminal 1200.
Through the transmission module 170, the mobile terminal 1200 provides the user with wireless broadband Internet access, which helps the user send and receive e-mails, browse web pages, access streaming media, and so on.
The processor 180 is a control center of the mobile terminal 1200, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal 1200 and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby integrally monitoring the mobile phone. Optionally, processor 180 may include one or more processing cores; in some embodiments, the processor 180 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
Specifically, the processor 180 includes: an Arithmetic Logic Unit (ALU), an application processor, a Global Positioning System (GPS) and a control and status Bus (Bus) (not shown).
The mobile terminal 1200 also includes a power supply 190 (e.g., a battery) for powering the various components. In some embodiments, the power supply may be logically coupled to the processor 180 via a power management system, which manages charging, discharging and power consumption. The power supply 190 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown, the mobile terminal 1200 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein.
Specifically, in the present embodiment, the display unit 140 of the mobile terminal 1200 is a touch screen display, and the mobile terminal 1200 further includes a memory 120 and one or more programs, wherein the one or more programs are stored in the memory 120, and the one or more programs configured to be executed by the one or more processors 180 include instructions for:
acquiring an initial image sequence to be sent;
determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity;
and sending a target image sequence, wherein the target image sequence at least comprises other images except the target image in the initial image sequence.
In some embodiments, when sending the target image sequence, the processor 180 may further execute instructions for:
determining a target sequence number of the target image in the initial image sequence;
and identifying the target image sequence according to the target sequence number, and sending the identified target image sequence.
In some embodiments, when identifying the target image sequence according to the target sequence number and sending the identified target image sequence, the processor 180 may further execute instructions for:
sequentially sending a plurality of synchronous frames according to the number of the images of the initial image sequence;
and if the sequence number of the current synchronous frame to be sent is detected to be different from the target sequence number, acquiring an image with the sequence number same as that of the current synchronous frame to be sent from the initial image sequence so as to enable the synchronous frame to be sent to carry the image.
In some embodiments, when identifying the target image sequence according to the target sequence number and sending the identified target image sequence, the processor 180 may further execute instructions for:
updating the sequence number of each image in the target image sequence according to the target sequence number and the sequence number of each image in the initial image sequence;
and sending the target image sequence with the updated sequence number.
In some embodiments, when determining a target image whose image similarity to both the adjacent previous image and the adjacent next image is greater than the preset similarity, the processor 180 may further execute instructions for:
determining a characteristic region of each image, and determining, as a target image, an image whose characteristic region has an image similarity greater than the preset similarity to the characteristic regions of both the adjacent previous image and the adjacent next image in the image sequence.
In some embodiments, the processor 180 may further execute instructions for:
receiving a target image sequence, and identifying identification information from the target image sequence;
determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image;
performing fusion processing on the first image and the second image to obtain a target image;
and constructing a new image sequence according to the target image and the target image sequence.
In some embodiments, when each image in the target image sequence is carried by a synchronization frame and at least one image group is determined from the target image sequence according to the identification information, the processor 180 may further execute instructions for:
if a first synchronous frame carrying images is detected, and the synchronous frames which are positioned after the first synchronous frame according to the receiving sequence do not carry images, determining the images carried by the first synchronous frame as first images;
if detecting that a second synchronous frame carries images and a synchronous frame before the second synchronous frame does not carry images according to the receiving sequence, determining the images carried by the second synchronous frame as second images;
and determining the first image and the second image as an image group.
In some embodiments, when each image in the target image sequence is marked with a sequence number and the first image and the second image are determined from the target image sequence according to the identification information, the processor 180 may further execute instructions for:
if the sequence numbers of two adjacently received images are not consecutive, determining the two adjacent images as an image group.
In some embodiments, when the first image includes a first pixel point and the first image and the second image are fused, the processor 180 may further execute instructions for:
acquiring pixel data of a second pixel point corresponding to the first pixel point in the second image;
and carrying out fusion processing on the pixel data of the first pixel point and the pixel data of the second pixel point.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The image processing method, the image processing apparatus, the storage medium, and the mobile terminal provided in the embodiments of the present application are described in detail above, and a specific example is applied in the description to explain the principle and the implementation of the present application, and the description of the embodiments above is only used to help understanding the technical solution and the core idea of the present application; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (10)

1. An image processing method applied to a transmitting terminal, comprising:
acquiring an initial image sequence to be sent;
determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity;
and sending a target image sequence, wherein the target image sequence at least comprises other images except the target image in the initial image sequence.
2. The image processing method according to claim 1, wherein the transmitting of the target image sequence comprises:
determining a target sequence number of the target image in the initial image sequence;
and identifying the target image sequence according to the target sequence number, and sending the identified target image sequence.
3. The image processing method according to claim 2, wherein the step of identifying the target image sequence according to the target sequence number and sending the identified target image sequence comprises:
sequentially sending a plurality of synchronous frames according to the number of the images of the initial image sequence;
if the serial number of the current synchronous frame to be sent is detected to be different from the target serial number, acquiring an image with the serial number same as that of the current synchronous frame to be sent from the initial image sequence so as to enable the synchronous frame to be sent to carry the image;
or, the step of identifying the target image sequence according to the target sequence number and sending the identified target image sequence includes:
updating the sequence number of each image in the target image sequence according to the target sequence number and the sequence number of each image in the initial image sequence;
and sending the target image sequence with the updated sequence number.
4. The image processing method according to claim 1, wherein the step of determining a target image whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity comprises:
and determining a characteristic region of each image, and determining an image with the image similarity between the characteristic region and the characteristic regions of the adjacent previous image and the adjacent subsequent image in the image sequence larger than a preset similarity as a target image.
5. An image processing method applied to a receiving terminal is characterized by comprising the following steps:
receiving a target image sequence, and identifying identification information from the target image sequence;
determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image;
performing fusion processing on the first image and the second image to obtain a target image;
and constructing a new image sequence according to the target image and the target image sequence.
6. The image processing method according to claim 5, wherein each image in the target image sequence is carried by a synchronization frame, and wherein the step of determining at least one image group from the target image sequence based on the identification information comprises:
if a first synchronous frame carrying images is detected, and the synchronous frames which are positioned after the first synchronous frame according to the receiving sequence do not carry images, determining the images carried by the first synchronous frame as first images;
if detecting that a second synchronous frame carries images and a synchronous frame before the second synchronous frame does not carry images according to the receiving sequence, determining the images carried by the second synchronous frame as second images;
determining the first image and the second image as an image group;
or, each image in the target image sequence is marked with a sequence number, and the step of determining the first image and the second image from the target image sequence according to the identification information comprises:
if the sequence numbers of two adjacently received images are not consecutive, determining the two adjacent images as an image group.
7. The image processing method according to claim 5, wherein the first image includes a first pixel point;
the step of performing fusion processing on the first image and the second image includes:
acquiring pixel data of a second pixel point corresponding to the first pixel point in the second image;
and carrying out fusion processing on the pixel data of the first pixel point and the pixel data of the second pixel point.
8. An image processing apparatus applied to a transmission terminal, comprising:
the acquisition module is used for acquiring an initial image sequence to be transmitted;
the first determining module is used for determining, as a target image, any image in the initial image sequence whose image similarity to both the adjacent previous image and the adjacent next image is greater than a preset similarity;
and the sending module is used for sending a target image sequence, and the target image sequence at least comprises other images except the target image in the initial image sequence.
9. An image processing apparatus applied to a receiving terminal, comprising:
the receiving module is used for receiving a target image sequence and identifying identification information from the target image sequence;
the second determining module is used for determining at least one image group from the target image sequence according to the identification information, wherein the image group comprises a first image and a second image;
the fusion module is used for carrying out fusion processing on the pixel data of the first image and the pixel data of the second image to obtain a target image;
and the construction module is used for constructing a new image sequence according to the target image and the target image sequence.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the image processing method according to any one of claims 1 to 7.
CN202010945112.XA 2020-09-10 2020-09-10 Image processing method, device, storage medium and mobile terminal Pending CN112073595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010945112.XA CN112073595A (en) 2020-09-10 2020-09-10 Image processing method, device, storage medium and mobile terminal


Publications (1)

Publication Number Publication Date
CN112073595A true CN112073595A (en) 2020-12-11

Family

ID=73663374




Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050157793A1 (en) * 2004-01-15 2005-07-21 Samsung Electronics Co., Ltd. Video coding/decoding method and apparatus
CN106303546A (en) * 2016-08-31 2017-01-04 四川长虹通信科技有限公司 Conversion method and system in a kind of frame rate
CN108804980A (en) * 2017-04-28 2018-11-13 合信息技术(北京)有限公司 Switching detection method of video scene and device
CN107886560A (en) * 2017-11-09 2018-04-06 网易(杭州)网络有限公司 The processing method and processing device of animation resource
CN108364338A (en) * 2018-02-06 2018-08-03 阿里巴巴集团控股有限公司 A kind of processing method of image data, device and electronic equipment
CN111277895A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Video frame interpolation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
肖永豪 (Xiao Yonghao) et al.: "Adaptive frame-dropping/frame-interpolation video processing based on video objects" [基于视频对象的自适应去帧/插帧视频处理], Journal of South China University of Technology (Natural Science Edition) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201211