CN111857515A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN111857515A
Authority
CN
China
Prior art keywords
image
pixel
pixel point
point
packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010721960.2A
Other languages
Chinese (zh)
Other versions
CN111857515B (en)
Inventor
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010721960.2A priority Critical patent/CN111857515B/en
Publication of CN111857515A publication Critical patent/CN111857515A/en
Priority to PCT/CN2021/095553 priority patent/WO2022016981A1/en
Application granted granted Critical
Publication of CN111857515B publication Critical patent/CN111857515B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiments of the application disclose an image processing method, an image processing device, a storage medium and an electronic device. The method comprises: performing image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and transmitting the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display image information corresponding to the image split packets on the base image. With the embodiments of the application, the loading and display time of an image can be shortened and the image display speed improved.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the development of communication technology, the functions on electronic devices are increasing. The electronic device can display the downloaded image for the user to view.
Currently, in the process of displaying an image, an electronic device generally downloads (or buffers) all image data of an image to be displayed, and loads and displays all image data of the image after the downloading is completed.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing device, a storage medium and an electronic device, which can shorten the loading and display time of an image and improve the image display speed. The technical solutions of the embodiments of the application are as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
performing image matting on an original image to obtain a base image corresponding to the original image and at least one image split packet;
and transmitting the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image.
In a second aspect, an embodiment of the present application provides another image processing method, including:
acquiring a base image and at least one image split packet transmitted by a sending device, where the base image and the at least one image split packet are generated after the sending device performs image matting on an original image;
and displaying the base image, and displaying the image information corresponding to the image split packets on the base image.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including:
an original image matting module, configured to perform image matting on an original image to obtain a base image corresponding to the original image and at least one image split packet;
and an image data sending module, configured to transmit the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
an image data acquisition module, configured to acquire a base image and at least one image split packet transmitted by a sending device, where the base image and the at least one image split packet are generated after the sending device performs image matting on an original image;
and an image data display module, configured to display the base image and display the image information corresponding to the image split packets on the base image.
In a fifth aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the above-mentioned method steps.
In a sixth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
in one or more embodiments of the present application, a sending device may perform image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and then transmit the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image. By matting and splitting the original image into a base image that occupies less memory than the original image and at least one image split packet, and sending the base image and the at least one image split packet to the receiving device in sequence, the receiving device can display the base image as soon as it is received and then display the image information of each received image split packet on the base image, which shortens the loading and display time when the image is displayed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 4 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 5 is a scene schematic diagram of a difference quotient representation method related to an image processing method provided in an embodiment of the present application;
FIG. 6 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an original image matting module according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a matting position determining unit provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of another image processing apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of an image data display module according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of another electronic device provided in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 15 is an architectural diagram of the Android operating system of FIG. 13;
FIG. 16 is an architectural diagram of the iOS operating system of FIG. 13.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it is noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship of the associated objects, meaning that there may be three relationships; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In the related art, an electronic device (e.g., a receiving device for displaying an image) can display an image only after all image data of the image has been downloaded. When the image occupies a large amount of memory, the electronic device needs a long time to download the image before it can be displayed, which results in a long loading and display time.
The present application will be described in detail with reference to specific examples.
In one embodiment, as shown in fig. 1, an image processing method is proposed, which can be implemented by means of a computer program and can run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as an independent tool application. The following description takes the image processing apparatus being a sending device as an example.
Wherein the sending device includes but is not limited to: a server, a wearable device, a handheld device, a personal computer, a tablet, an in-vehicle device, a smartphone, a computing device, or other processing device connected to a wireless modem, and so forth. The terminal devices in different networks may be called different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, receiving device, terminal, wireless communication device, user agent or user equipment, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), electronic device in a 5G network or future evolution network, and the like.
Specifically, the image processing method includes:
step S101: and carrying out image matting on the original image to obtain a base image corresponding to the original image and at least one image unpacking packet.
An image is a vivid description or portrayal of a natural thing or objective object (a human, animal, plant, landscape, etc.); in other words, an image is a representation of such an object that contains information about the object being described. Usually an image is a picture with a visual effect. The original image can be understood as the image to be processed in this embodiment. The original image may be a photograph, a drawing, a clip art, a map, a satellite cloud picture, a movie frame, an X-ray film, an electroencephalogram, an electrocardiogram, etc. In practical applications, the original image is usually image-processed and then transmitted to a receiving device for display.
The base image can be understood as the image obtained after the image processing method of the embodiments of the present application is performed on the original image. The memory occupied by the base image is smaller than that of the original image, and the image resolution of the base image is lower than that of the original image. In practical applications, processing the original image avoids the problem in the related art that, when the original image occupies a large amount of memory, the receiving device can display it only after the whole original image has been received. It can be understood that the receiving device does not need to wait for both the base image and the at least one image split packet: as soon as the base image is received, the receiving device can load and display the base image corresponding to the original image, and then display the image information corresponding to each image split packet on the base image in turn, which greatly shortens the image display time of the whole process.
Specifically, the sending device may obtain an original image to be transmitted. The original image may be an image acquired by an image acquisition system of the sending device, that is, by a camera system composed of the sending device's cameras, or it may be an image obtained by the sending device from the Internet or from another electronic device. The sending device then performs image matting on the original image to be transmitted. The matting is usually performed on at least one pixel point of the original image; at least one round of matting is performed, and an image split packet is obtained after each round. One round of matting on the original image is explained as follows:
determine the matting pixel points in the original image according to a preset pixel selection algorithm, that is, the positions of the at least one pixel point to be matted out; acquire the pixel value of each matting pixel point, and filter the pixel values of the matting pixel points out of the original image, for example by setting the pixel value of each matting pixel point to 0 or to a reference pixel value; store the acquired pixel values of the matting pixel points together with the position mapping relationship of the matting pixel points in the original image, and use the stored pixel values and position mapping relationship to generate an image split packet corresponding to the original image.
It can be understood that, in practical applications, multiple rounds of matting are performed on the original image in the above manner, and at least one image split packet corresponding to the original image can be obtained. After the multiple rounds of matting, the pixel values of the matting pixel points have been filtered out of the original image, and the original image with those pixel values filtered out is the base image.
Optionally, the pixel selection algorithm may be determined according to the actual application environment. It may be a linear pixel selection algorithm, in which the selection of the matting pixel points follows a fixed functional mapping, for example a pixel selection interval is set and one matting pixel point is selected every n pixel points. It may also be a non-linear pixel selection algorithm, in which the selection of the matting pixel points does not follow a fixed functional mapping; for example, the matting pixel points may be selected discretely.
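As an illustration of one matting round, the following Python sketch assumes a grayscale image stored as a NumPy array and an interval-based (linear) pixel selection rule; the function and parameter names are illustrative and not taken from the patent.

```python
import numpy as np

def matting_round(image: np.ndarray, interval: int = 3, fill_value: int = 0):
    """One illustrative matting round: select matting pixel points at a fixed
    interval, record their values and positions, and filter them out of the
    image. `interval` and `fill_value` are assumed parameters."""
    h, w = image.shape[:2]
    flat_idx = np.arange(0, h * w, interval)         # linear, interval-based pixel selection
    rows, cols = np.unravel_index(flat_idx, (h, w))  # positions of the matting pixel points
    values = image[rows, cols].copy()                # pixel values of the matting pixel points
    filtered = image.copy()
    filtered[rows, cols] = fill_value                # filter the matted values out of the image
    # the split packet keeps the pixel values together with their position mapping
    split_packet = {"positions": list(zip(rows.tolist(), cols.tolist())),
                    "values": values.tolist()}
    return filtered, split_packet
```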
Step S102: transmitting the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image.
Specifically, after performing image matting on the original image to generate the base image and the at least one image split packet, the sending device may transmit the base image and the at least one image split packet to the receiving device in sequence. After receiving the base image, the receiving device can display it without waiting for the data of the original image to be completely downloaded; after receiving the at least one image split packet, it parses each packet on the basis of the base image and displays the image information corresponding to the image split packets on the base image in turn. In other words, the sending device transmits the base image and the at least one image split packet in sequence, and the receiving device receives them in sequence, displays the base image, and then displays the image information corresponding to each image split packet on the base image in turn.
In this embodiment of the present application, a sending device may perform image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and then transmit the base image and the at least one image split packet to a receiving device in sequence, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image in turn. By matting and splitting the original image into a base image that occupies less memory than the original image and at least one image split packet, and sending the base image and the at least one image split packet to the receiving device in sequence, the receiving device can display the base image as soon as it is received and then display the image information of each received image split packet on the base image in turn, which shortens the loading and display time during image display and improves the image display speed.
Referring to fig. 2, fig. 2 is a schematic flowchart of another embodiment of an image processing method provided in the present application, and the method is applied to a sending device. Specifically, the method comprises the following steps:
step S201: determining a reference point in the original image, and determining at least one first pixel point in the original image based on a preset pixel selection algorithm and the reference point.
A reference point can be understood as a reference matting point for performing one round of matting on the original image; all first pixel points of that round can be obtained from the reference point combined with a preset pixel selection algorithm. The number of reference matting points may be one or more; when there are multiple reference matting points, the sending device may determine at least one first pixel point for each reference matting point in a serial or parallel manner.
The first pixel points can be understood as the matting pixel points to be matted out of the original image.
The preset pixel selection algorithm may be determined according to the actual application environment. It may be a linear pixel selection algorithm, in which the selection of the first pixel points follows a fixed functional mapping, for example a pixel selection interval is set and one first pixel point is selected every n pixel points, i.e., the first pixel points are selected at equal intervals. It may also be a non-linear pixel selection algorithm, in which the selection of the first pixel points does not follow a fixed functional mapping; for example, the first pixel points may be selected discretely.
One pixel selection algorithm may set a total selection number x of first pixel points: starting from the reference point (a, b), one first pixel point is selected every two pixel points according to the selection rule of the algorithm, until the number of first pixel points indicated by the total selection number x has been selected.
Another pixel selection algorithm may leave the total number of first pixel points unlimited and use only the reference point (a, b) and the selection rule of the algorithm as references; for example, one first pixel point is selected every two pixel points, and the selection process is completed once all pixel points of the original image have been traversed according to the selection rule.
Yet another pixel selection algorithm may select the first pixel points based on the pixel value of the reference point. For example, a reference point (a, b) is set and its pixel value is a; all pixel points whose pixel values match the pixel value of the reference point are selected as first pixel points, where "matching" may mean that the difference between a pixel point's value and the pixel value a is smaller than a pixel threshold b. All pixel points in the original image whose values differ from a by less than b are taken as first pixel points. All the first pixel points of this round in the original image are also the "first matting position in the original image" mentioned in some embodiments.
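The three selection strategies above can be sketched as follows; this is a minimal illustration assuming a grayscale NumPy image and a row-major traversal starting at the reference point, with all names and defaults being assumptions.

```python
import numpy as np

def select_fixed_count(h, w, ref, interval=2, total=100):
    """Selection with a preset total number x: from the reference point, take one
    first pixel point every `interval` positions until `total` points are chosen."""
    start = ref[0] * w + ref[1]
    idx = np.arange(start, h * w, interval)[:total]
    return list(zip(*np.unravel_index(idx, (h, w))))

def select_by_interval(h, w, ref, interval=2):
    """Selection with no fixed total: walk the whole image from the reference
    point, taking one first pixel point every `interval` positions."""
    start = ref[0] * w + ref[1]
    idx = np.arange(start, h * w, interval)
    return list(zip(*np.unravel_index(idx, (h, w))))

def select_by_value(image, ref, threshold):
    """Value-based (non-linear) selection: take every pixel whose value differs
    from the reference point's value by less than `threshold`."""
    ref_value = int(image[ref])
    mask = np.abs(image.astype(np.int64) - ref_value) < threshold
    return list(zip(*np.nonzero(mask)))
```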
Step S202: acquiring a first pixel value of the at least one first pixel point, and generating an image split packet.
The process of generating an image split packet is explained as follows:
1. After determining the at least one first pixel point in the original image based on the preset pixel selection algorithm and the reference point, the sending device obtains the first pixel value of the at least one first pixel point, that is, the first pixel values of all the first pixel points in the original image. After obtaining the first pixel values, the sending device can determine the position information of each first pixel point in a preset matrix.
In the embodiment of the application, the preset matrix is used to store, in the form of an image matrix, the first pixel values of the first pixel points that have been matted out; each matrix point in the preset matrix is used to store the first pixel value of one first pixel point.
The position information may be understood as the matrix position (e.g., the i-th row and j-th column) of a first pixel point in the preset matrix; when the first pixel value of a certain first pixel point is stored, the first pixel value is filled into the matrix position of the matrix point corresponding to that first pixel point in the preset matrix.
Further, the position information of each first pixel point in the preset matrix is usually determined when the pixel points are selected from the original image. For example, the position information of each first pixel point in the preset matrix can be determined, in order of row and column values, from the row and column positions of the first pixel points in the original image matrix corresponding to the original image.
2. Add the first pixel value of each first pixel point to the preset matrix based on the position information, to generate an image difference matrix.
The image difference matrix is the image matrix obtained after the first pixel values of the first pixel points have been filled into the preset matrix.
Specifically, each first pixel value is filled into the specified matrix point in the preset matrix according to the position information of the corresponding first pixel point. For example, if the position information is a row value and a column value of a matrix point in the preset matrix, the sending device fills the first pixel value of the first pixel point into the position corresponding to that row value and column value. By analogy, the pixel values from the first to the last first pixel point are written into the preset matrix, and the image difference matrix is thus generated.
In the embodiment of the present application, the type of the pixel value is not limited, and may be an RGB type pixel value, a gray scale type pixel value, or the like.
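A minimal sketch of step 2, assuming the preset matrix is a zero-initialized NumPy array of the same shape as the original image and the position information is a (row, column) pair per first pixel point:

```python
import numpy as np

def build_difference_matrix(values, positions, shape):
    """Fill the preset matrix with the first pixel values at their recorded
    (row, column) matrix positions; entries that are never filled stay at 0."""
    diff = np.zeros(shape, dtype=np.uint8)   # the preset matrix
    for value, (row, col) in zip(values, positions):
        diff[row, col] = value               # write the pixel value into its matrix point
    return diff                              # the resulting image difference matrix
```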
3. Generate an image split packet based on the image difference matrix, the reference point and the pixel selection algorithm.
Specifically, after generating the image difference matrix, the sending device may perform packet encapsulation and/or packet compression on the image difference matrix, the reference point and the pixel selection algorithm, thereby generating an image split packet.
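The packet encapsulation and compression of step 3 might look like the following sketch; pickle and zlib are stand-in choices, since the patent does not prescribe a serialization or compression format.

```python
import pickle
import zlib

def encapsulate_split_packet(diff_matrix, reference_point, selection_algorithm):
    """Bundle the image difference matrix, the reference point and an identifier
    of the pixel selection algorithm, then compress the bundle."""
    payload = {
        "difference_matrix": diff_matrix,
        "reference_point": reference_point,
        "selection_algorithm": selection_algorithm,  # e.g. a name or its parameters
    }
    return zlib.compress(pickle.dumps(payload))

def parse_split_packet(packet_bytes):
    """Inverse operation, as used on the receiving side."""
    return pickle.loads(zlib.decompress(packet_bytes))
```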
Step S203: generating a target image other than the image split packet based on the at least one first pixel point and the original image.
In this embodiment of the present application, multiple rounds of matting may be performed on the original image, and an image split packet may be generated in the above manner in each round. While each round generates an image split packet, the sending device can also reduce the memory occupied by the original image. The specific steps are as follows:
in one possible implementation, the sending device sets the first pixel values of all the first pixel points in the original image to a target pixel value. Illustratively, the target pixel value may be 0 or a value other than 0; in practical applications, the target pixel value may be smaller than the first pixel value, which is not specifically limited here. Preferably, the target pixel value is 0. The image obtained after the first pixel points of the original image are set in this way is the target image; the memory occupied by the target image is reduced compared with the original image, and the target image other than the image split packet can be obtained in the above manner.
In another possible implementation, after setting the first pixel values of all the first pixel points in the original image to the target pixel value, the sending device may further perform order reduction on the original image. Illustratively, assuming that the image matrix corresponding to the original image is an M × N matrix and the image difference matrix formed by the pixel values of the at least one first pixel point is an m × n matrix, the sending device determines the reduction order of the original image to be (M-m, N-n). All first pixel points are filtered out of the original image based on this reduction order, thereby obtaining the reduced-order target image, that is, the target image other than the image split packet.
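A sketch of generating the target image under the two implementations above; the order-reduction branch is a simplified reading that drops the matted rows and columns, which is an assumption rather than the patent's exact rule.

```python
import numpy as np

def make_target_image(image, positions, target_value=0, reduce_order=False):
    """Produce the target image left after one matting round: set the matted
    first pixel values to the target value, and optionally drop the matted
    rows/columns so an M x N image becomes an (M-m) x (N-n) matrix."""
    target = image.copy()
    rows, cols = zip(*positions)
    target[list(rows), list(cols)] = target_value      # filter out the first pixel values
    if reduce_order:
        keep_rows = np.setdiff1d(np.arange(image.shape[0]), np.unique(rows))
        keep_cols = np.setdiff1d(np.arange(image.shape[1]), np.unique(cols))
        target = target[np.ix_(keep_rows, keep_cols)]  # reduced-order target image
    return target
```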
Through steps S201 to S203, a matting position in the original image (i.e., the position of the at least one first pixel point in the original image) is determined based on the reference point, and an image split packet and a target image other than the image split packet are then generated based on that matting position.
Step S204: when the target image does not meet the stop-matting condition, determining a second matting position in the target image, and executing the step of acquiring the first pixel value of the at least one first pixel point to generate an image split packet.
Specifically, the sending device may perform multiple rounds of matting on the original image, and in each round may generate an image split packet and a target image other than the image split packet in the manner described above. It can be understood that, after the current round of matting is completed, the sending device may determine a second matting position in the target image, where the second matting position indicates the pixel points to be matted out in the next round. Specifically, a next reference point is determined in the target image, and at least one first pixel point is determined in the target image based on the preset pixel selection algorithm and that next reference point; the positions of these first pixel points in the target image constitute the second matting position. Steps S202-S203 are then executed, that is, acquiring the first pixel value of the at least one first pixel point and generating an image split packet, and generating a target image other than the image split packet based on the at least one first pixel point and the original image.
The next reference point may be one of the reference points preset by the sending device before processing the original image; the positions of the reference points may be the same or different. In addition, reference may be made to the explanation in step S205.
Step S205: when the target image meets the stop-matting condition, obtaining at least one image split packet and a base image other than the at least one image split packet.
The stop-matting condition can be understood as a condition for ending the matting of the original image, or as a condition for ending the matting of the target image in the current round.
One stop-matting condition may be the number of matting rounds, which is related to the total number of image split packets. The sending device may preset a matting-count threshold. When the current matting count indicated by the current round's target image equals the matting-count threshold, it is determined that the target image meets the stop-matting condition; the target image is then taken as the base image, so that at least one image split packet and the base image other than the at least one image split packet are obtained. Otherwise, when the current matting count indicated by the current round's target image is smaller than the matting-count threshold, it is determined that the target image does not meet the stop-matting condition, and step S204 is executed.
Another stop-matting condition may be the memory size of the target image. The sending device may preset a memory threshold. If the current memory of the current round's target image is smaller than or equal to the memory threshold, it is determined that the target image meets the stop-matting condition; the target image is then taken as the base image, so that at least one image split packet and the base image other than the at least one image split packet are obtained. Otherwise, if the current memory of the current round's target image is larger than the memory threshold, it is determined that the target image does not meet the stop-matting condition, and step S204 is executed.
Another stop-matting condition may be the total number of first pixel points matted out of the original image during the matting process. If the total number of image split packets indicated by the current round's target image is smaller than a number threshold, it is determined that the target image meets the stop-matting condition; the target image is then taken as the base image, so that at least one image split packet and the base image other than the at least one image split packet are obtained. Otherwise, if the total number of image split packets indicated by the current round's target image is larger than or equal to the number threshold, it is determined that the target image does not meet the stop-matting condition, and step S204 is executed.
It should be noted that the stop matting condition is determined based on the actual application environment, and is not specifically limited here.
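Putting the rounds together, the following sketch shows the matting loop with two of the stop-matting conditions above (matting count and memory size); the thresholds and the `matting_round` callable are illustrative assumptions.

```python
def matting_loop(original, matting_round, max_rounds=8, memory_threshold=64 * 1024):
    """Repeat matting rounds until a stop-matting condition holds: here either a
    matting-count threshold or a memory threshold on the target image (nbytes of
    a NumPy array). The thresholds are illustrative."""
    split_packets = []
    target = original
    while True:
        target, packet = matting_round(target)    # one round, as sketched earlier
        split_packets.append(packet)
        if len(split_packets) >= max_rounds:      # matting-count condition
            break
        if target.nbytes <= memory_threshold:     # memory-size condition
            break
    return target, split_packets                  # the remaining target image is the base image
```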
Step S206: the communication quality of the receiving device and the display resolution of the receiving device are obtained.
The communication quality may be understood as at least one communication parameter that measures the network condition of the receiving device.
Specifically, after the communication link between the sending device and the receiving device is established, the sending device may use a communication link monitoring mechanism to monitor the current communication link and thereby obtain the communication quality of the receiving device. Specifically, the sending device may obtain at least one communication parameter of the communication link, calculate a communication quality score from the communication parameters, and judge the communication state of the receiving device based on the communication quality score, as follows:
one calculation method is to set the same or different weight values for the communication parameters, perform a weighted calculation based on each communication parameter and its weight value, and obtain the current communication quality score;
another calculation method is to set reference parameter characteristics (such as a reference indication value or a reference indication range) for each communication parameter, calculate difference characteristic information (such as a differential communication parameter value) between each of the at least one communication parameter and its corresponding reference parameter characteristic, and score according to the difference characteristic information. Scoring levels may be set, for example three levels with level A > level B > level C. Taking a communication parameter A1 with reference indication value A as an example: the differential communication value a between the communication parameter A1 and the reference indication value A is calculated, and when the differential communication value a falls in the range corresponding to level B, the score corresponding to level B is taken as the current communication quality score.
The communication parameters include, but are not limited to, at least one of: Reference Signal Received Power (RSRP) of the uplink/downlink data signal of the current communication antenna, Received Signal Code Power (RSCP), the ratio of received chip signal strength to noise strength (Ec/Io), the ratio of modulated bit power to noise spectral density (Ec/No), Signal-to-Noise Ratio (SNR), Reference Signal Received Quality (RSRQ), and the Bit Error Ratio (BER), Block Error Rate (BLER) or Packet Error Ratio (PER) of the received signal, so as to evaluate the communication condition of the receiving device on the current communication link. Of course, other parameters may also be measured to evaluate the communication condition of the current communication link.
It should be noted that there are many communication parameters of the monitored communication link; one or more of the parameters mentioned above may be used, which is not specifically limited here.
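A minimal sketch of the weighted scoring described above; the parameter names, weights and value ranges are invented for illustration.

```python
def communication_quality_score(params, weights):
    """Weighted combination of measured communication parameters; higher is better."""
    return sum(weights.get(name, 0.0) * value for name, value in params.items())

# Hypothetical usage: RSRP and SNR raise the score, BLER lowers it.
score = communication_quality_score(
    {"rsrp_dbm": -85.0, "snr_db": 18.0, "bler": 0.02},
    {"rsrp_dbm": 0.2, "snr_db": 1.0, "bler": -50.0},
)
```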
In one possible implementation, the sending device may input the acquired communication parameters into a trained score determination model and output a communication quality score. To create the score determination model, communication sample data is obtained in the actual application environment, characteristic information is extracted, and the score corresponding to the communication sample data is labeled, where the characteristic information includes at least one communication parameter (RSSI, SNR, RSCP and the like). The score determination model may be trained with a large number of communication samples; for example, it may be implemented based on at least one of a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, and a Logistic Regression (LR) model, and may be trained on sample data labeled with scores, so that a trained score determination model is obtained.
Optionally, the communication quality may be actively reported by the receiving device; for example, the receiving device sends its current communication quality to the sending device when requesting an image. Alternatively, the sending device may send an acquisition request for the communication quality to the receiving device, and the receiving device feeds back its current communication quality to the sending device based on the acquisition request.
The display resolution of the receiving device can be pre-stored in the sending device, and the sending device can obtain the display resolution of the receiving device at the local terminal; or, the receiving device sends the current display resolution to the sending device when acquiring the image; or, the sending device may send a resolution obtaining request to the receiving device for the display resolution, and the receiving device feeds back the current display resolution to the sending device based on the resolution obtaining request.
Step S207: determining a reference number of target image split packets among the at least one image split packet based on the communication quality and the display resolution.
In one possible implementation, the sending device pre-stores a correspondence between the communication quality, the display resolution and a reference number, where the reference number is smaller than the total number of image split packets; the correspondence may take the form of a correspondence set. The sending device may look up, in the correspondence set, the reference number corresponding to the communication quality (e.g., the communication quality score) and the display resolution. It can be understood that the sending device may transmit the number of target image split packets indicated by the reference number to the receiving device, that is, it determines the image data that meets the display requirement of the receiving device based on the receiving device's actual application environment, such as its communication quality and display specification. On the one hand, this saves transmission overhead during image transmission; on the other hand, it better matches the actual usage scenario of the receiving device: if the display resolution of the receiving device is smaller than the image resolution of the original image, the full effect of the original image cannot be displayed even if the receiving device receives all of the original image data.
In another possible implementation, the sending device may determine reference resolutions for combinations of the base image and the image split packets. For example, if the total number of image split packets is n, the sending device sequentially determines resolution 1 corresponding to the base image plus 1 image split packet, resolution 2 corresponding to the base image plus 2 image split packets, and so on up to resolution n corresponding to the base image plus n image split packets. Among these reference resolutions, the sending device determines a target resolution matching the display resolution, where the target resolution is less than or equal to the display resolution of the receiving device, and determines a specified number of image split packets (plus the base image) based on the target resolution. Meanwhile, the maximum number of image split packets that the transmission can accommodate together with the base image is determined according to the communication quality; it can be understood that transmitting more image split packets than this maximum number leads to a high packet loss rate of the image data and a long transmission delay. The sending device then takes the minimum of the maximum transmission number and the specified number as the reference number.
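A sketch of determining the reference number from the communication quality and the display resolution, under the assumption that the per-packet-count resolutions and the quality-to-capacity mapping are available as lookup structures; all names and values are illustrative.

```python
def choose_reference_number(quality_score, display_resolution, packet_resolutions,
                            max_packets_for_quality):
    """Pick how many target image split packets to transmit. `packet_resolutions[k]`
    is the resolution reached by the base image plus k split packets."""
    specified = 0
    for count, resolution in enumerate(packet_resolutions):
        if resolution <= display_resolution:          # target resolution must not exceed the display
            specified = count
    max_transmission = max_packets_for_quality(quality_score)  # what the link quality can carry
    return min(specified, max_transmission)

# Hypothetical usage: resolutions given as total pixel counts, 4 split packets available.
n = choose_reference_number(
    quality_score=0.8,
    display_resolution=1280 * 720,
    packet_resolutions=[320 * 180, 640 * 360, 960 * 540, 1280 * 720, 1920 * 1080],
    max_packets_for_quality=lambda s: 3 if s > 0.5 else 1,
)
```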
Step S208: transmitting the base image and the reference number of target image split packets to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the target image split packets on the base image.
For details, reference may be made to the related definitions in step S102, which are not described herein again.
In a specific implementation scenario, in step S203, after performing order reduction on the original image, the sending device may generate image coding information corresponding to the base image and send the image coding information to the receiving device, so that the receiving device can restore the reduced-order base image based on the image coding information. Specifically:
1. After each round of order reduction, the sending device acquires the position mapping relationship between each pixel point in the target image and the original image, as well as the image size of the original image.
The position mapping relationship represents the position, in the original image, of each pixel point of the current target image. It can be understood that when the target image is reduced, only the coordinates of the target image's pixel points within the original image matrix corresponding to the original image need to be stored.
2. Generating image coding information corresponding to the target image based on the position mapping relation and the image size;
specifically, after acquiring the position mapping relationship and the image size, the sending device may generate image coding information including the position mapping relationship and the image size, where the image coding information is used to instruct the receiving device to display the base image based on the image size and the position mapping relationship, and perform image restoration based on the base image based on the sequentially received image unpacking.
3. When the target image meets the stop-matting condition, the sending device uses the target image of the current (i.e., latest) round as the base image.
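A minimal sketch of the image coding information described in item 2 above, assuming it is carried as a simple mapping; the field names are illustrative.

```python
def build_image_coding_info(position_mapping, original_size):
    """Image coding information for a reduced-order target/base image: the original
    image size plus, for each remaining pixel, its (row, column) in the original
    image, so the receiving device can place base-image pixels (and later the
    split-packet pixels) back at their original coordinates."""
    return {
        "original_size": original_size,        # (height, width) of the original image
        "position_mapping": position_mapping,  # i-th base-image pixel -> (row, col) in the original
    }
```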
In this embodiment of the present application, a sending device may perform image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and then transmit the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image. By matting and splitting the original image into a base image that occupies less memory than the original image and at least one image split packet, and sending the base image and the at least one image split packet to the receiving device in sequence, the receiving device can display the base image as soon as it is received and then display the image information of each received image split packet on the base image in turn, which shortens the loading and display time during image display. In the process of generating the base image, the original image can also be order-reduced, which greatly reduces the memory occupied by the base image received by the receiving device. Moreover, the communication quality and display resolution of the receiving device can be taken into account in the image display process to determine the corresponding number of image split packets to send together with the base image, which saves transmission overhead during image transmission and better matches the actual usage scenario of the receiving device.
In one embodiment, as shown in fig. 3, an image processing method is proposed, which can be implemented by means of a computer program and can run on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as an independent tool application. The following description takes the image processing apparatus being a receiving device as an example.
Wherein the receiving device includes but is not limited to: a server, a wearable device, a handheld device, a personal computer, a tablet, an in-vehicle device, a smartphone, a computing device, or other processing device connected to a wireless modem, and so forth. The terminal devices in different networks may be called different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, receiving device, terminal, wireless communication device, user agent or user equipment, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), electronic device in a 5G network or future evolution network, and the like.
Specifically, the image processing method includes:
step S301: the method comprises the steps of obtaining a basic image and at least one image unpacking packet transmitted by a sending device, wherein the basic image and the at least one image unpacking packet are generated after image matting processing is carried out on an original image by the sending device.
The base image can be understood as the image obtained by performing image matting on the original image, and the original image can be recovered by performing image restoration on the base image together with the at least one image split packet. The memory occupied by the base image is usually much smaller than that of the original image, and the image resolution of the base image is lower than that of the original image. It can be understood that the base image has lower precision than the original image, but its display effect is highly similar to that of the original image. In practical applications, by transmitting the base image and the at least one image split packet after processing the original image, the sending device avoids the problem in the related art that, when the original image occupies a large amount of memory, the receiving device can display it only after receiving the whole original image. It can be understood that the receiving device does not need to wait for all the data: as soon as the base image is received, the receiving device can load and display the base image corresponding to the original image, and then display the image information corresponding to each image split packet on the base image, which greatly shortens the image display time of the whole process.
The sending device may transmit the base image and the at least one image split packet in sequence, and the receiving device may receive them in sequence, then display the base image and display the image information corresponding to each image split packet on the base image in turn.
Specifically, after the receiving device establishes a communication link with the sending device that sends the image data, the receiving device may obtain from the sending device the base image and the at least one image split packet transmitted in sequence. For example, the receiving device may send an image acquisition request for the original image to the sending device; the sending device receives and responds to the image acquisition request, performs image matting on the original image to obtain the base image and the at least one image split packet corresponding to the original image, and then transmits the base image and the at least one image split packet in sequence to the receiving device over the communication link, at which point the receiving device obtains them in sequence. As another example, the sending device may actively send the base image and the at least one image split packet to the receiving device; illustratively, when it detects that the communication link with the receiving device is normal, the sending device actively sends the base image and the at least one image split packet to the receiving device in sequence.
Further, the communication link between the receiving device and the sending device is generally a communication connection service established between the two ends by using a preset communication architecture, where the communication architecture refers to the communication structure used for data communication and defines various aspects of a data network communication system, including the interface types of the communication, the network protocols used, the data frameworks implemented, the types of communication wiring, and the like. Common communication architectures include the TCP/IP architecture, the Netty architecture, the C/S architecture, the SOA architecture, and the like. For example, one communication architecture may be the open-source Java-based Netty framework, used together with the WebSocket technology to establish a long-connection (or short-connection) communication link between the receiving device and the sending device in a communication network and to exchange communication data between the two ends over that link. Communication links based on long connections and communication links based on short connections are explained in detail below:
the established communication link may be a http long-connection communication link or a http short-connection communication link.
A long connection means that multiple packets can be sent continuously over one connection, and during the connection hold period, if no packet is sent, a link check packet needs to be sent in both directions.
The operation steps of the long connection are as follows: establish a connection-data transfer (maintain a connection).
The short connection means that when both communication parties have data interaction, a connection is established, and after the data transmission is completed, the connection is disconnected, that is, only one service is transmitted in each connection.
The short connection operation steps are as follows: establishing connection-data transmission-closing connection.
Long connections are often used for frequent, point-to-point communications. Each TCP connection needs three-step handshake, which requires time, and if each operation is a short connection, the processing speed is reduced greatly if the operation is repeated, so that each operation is not disconnected after the operation is completed, and the data packet is OK when the next processing is performed, and the TCP connection does not need to be established. For example: the connection of the database uses a long connection, if the communication is frequent with a short connection, the socket error is caused, and the frequent socket creation is also a waste of resources.
Http services like WEB sites generally use short links, because long connections consume certain resources for receiving devices and sending devices, while connections of thousands or even billions of clients, which are frequent like WEB sites, use short connections, which saves some resources. Therefore, the concurrency is high, but each user needs to use the short link without frequent operation.
A long connection saves repeated TCP establishment and tear-down operations, reducing waste and saving time. In practical applications, when the communication data carried by the first communication link is real-time image data with high requirements on transmission quality, a communication link based on a long connection may be adopted; for example, a long-connection-based communication link is used between the receiving device and the sending device.
In this case, the long connection is easy to manage for both communicating parties: the existing connections are all useful connections, and no additional control means are required. Conversely, if the client sends requests frequently over short connections, time and bandwidth are wasted on TCP set-up and tear-down operations. Therefore, in the embodiment of the present application, the receiving device may establish a communication connection with the sending device, such as a long connection, in a manner appropriate to the actual communication data transmission environment.
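As a purely illustrative sketch of the long-connection pattern described above (establish once, transfer many frames, close once), the following Python code length-prefixes each frame so that the base image and every image split packet can be sent over a single TCP socket. The function names and the 4-byte length header are assumptions of this example and are not part of the embodiments.

```python
import socket
import struct

def send_frames(sock: socket.socket, frames: list) -> None:
    # One long connection: the socket is established once by the caller, every
    # frame (the base image first, then each image split packet) is sent with a
    # 4-byte length prefix, and the connection is closed only after the last frame.
    for payload in frames:
        sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("link closed before the frame was complete")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    # Counterpart on the receiving device: read the length prefix, then read
    # exactly that many payload bytes from the same long-lived connection.
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

A short-connection variant would instead open and close a socket around every single frame, paying the handshake cost that the passage above attributes to short connections.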
Step S302: and displaying the basic image, and displaying the image information corresponding to the image split packet on the basic image.
Specifically, after sequentially obtaining the base image and the at least one image split packet generated by performing image matting on the original image, the receiving device may also sequentially output them as images. In one possible implementation, the receiving device first obtains the base image and then obtains the at least one image split packet in sequence. It can be understood that, after receiving the base image, the receiving device does not have to wait for the data of the original image to be completely downloaded (i.e., the base image plus all image split packets); that is, the base image can be displayed in advance. Then, as the image split packets are received in sequence, the receiving device performs packet analysis on each image split packet, in the order in which they were received and on the basis of the displayed base image, to obtain the image information used to restore the displayed base image, and sequentially displays the image information corresponding to each image split packet on the base image. Through this image analysis processing, the original image corresponding to the base image is restored step by step.
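The progressive display behaviour described above (show the base image as soon as it arrives, then refine it packet by packet) can be sketched as follows; `frames`, `display` and `apply_split_packet` are hypothetical placeholders for the receiving device's transport, rendering and packet-analysis routines, not names from the embodiments.

```python
def progressive_display(frames, display, apply_split_packet):
    """frames yields the base image first and then each image split packet,
    in the order in which they arrive from the sending device."""
    frame_iter = iter(frames)
    current = next(frame_iter)          # first frame: the base image
    display(current)                    # shown before the rest has downloaded
    for packet in frame_iter:           # remaining frames: image split packets
        current = apply_split_packet(current, packet)
        display(current)                # the original image is restored step by step
    return current
```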
In a possible implementation manner, the receiving device may acquire, from the sending device, an image whose resolution matches the communication quality of the current network and the display resolution of the receiving device. It can be understood that, if the sending device splits the original image into one base image and n image split packets, the receiving device may acquire the one base image and the m image split packets (m smaller than n) that the sending device decides to transmit based on the communication quality and the display resolution of the receiving device. When the receiving device displays an image based on the base image and the m image split packets, the resolution of the displayed image matches the actual application environment of the receiving device; that is, the receiving device does not need to display the entire original image in this case. The specific steps are as follows:
1. The receiving device obtains the current communication quality and the display resolution of the receiving device, and sends the communication quality and the display resolution to the sending device.
Specifically, after the communication link between the receiving device and the sending device is established, the receiving device may have a communication state monitoring mechanism through which it monitors the current communication link to obtain its current communication quality. Specifically, the receiving device may obtain at least one communication parameter of the first communication link, calculate a communication quality score from each communication parameter, and evaluate the communication status of the receiving device based on that score. For the specific implementation, reference may be made to the related description in step S205, which is not repeated here. After determining the current communication quality and the display resolution, the receiving device sends them to the sending device over the communication link.
2. The receiving device acquires the base image sequentially transmitted by the sending device, and acquires the reference number of image split packets, the reference number being determined by the sending device based on the communication quality and the display resolution.
For the step in which the sending device determines the reference number of image split packets based on the communication quality and the display resolution, reference may be made to step S207, which is not repeated here.
Further, at this time, the receiving device only needs to display the basic image, and sequentially displays the image information corresponding to the at least one image split packet indicated by the reference number on the basic image. It can be understood that the sending device may transmit the target image depacketization packet indicated by the reference number to the receiving device based on the reference number, so that the receiving device determines corresponding image data meeting the display requirement of the receiving device based on the corresponding actual application environment, such as communication quality, display specification, and the like.
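One possible (and purely hypothetical) way for the sending device to turn the reported communication quality and display resolution into a reference number m of image split packets (m not larger than the total number n) is sketched below; the [0, 1] scoring scale and the scaling rule are assumptions of this example, not values taken from the embodiments.

```python
def reference_packet_count(total_packets: int,
                           quality_score: float,
                           display_resolution: tuple,
                           original_resolution: tuple) -> int:
    """Map an assumed communication quality score in [0, 1] and the display
    resolution to a number of image split packets to transmit."""
    display_pixels = display_resolution[0] * display_resolution[1]
    original_pixels = original_resolution[0] * original_resolution[1]
    detail_ratio = min(1.0, display_pixels / original_pixels)         # detail the screen can show
    usable_ratio = detail_ratio * max(0.0, min(1.0, quality_score))   # degrade further on poor links
    return max(0, min(total_packets, round(total_packets * usable_ratio)))
```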
In the embodiment of the application, a receiving device acquires a base image and at least one image split packet sequentially transmitted by a sending device, the base image and the at least one image split packet being generated after the sending device performs image matting on an original image; the receiving device then displays the base image and displays the image information corresponding to the image split packets on the base image. Because the base image obtained by matting and splitting the original image occupies little memory, the receiving device can display the base image corresponding to the original image as soon as the base image is first received, and then display the image information of the received image split packets on the base image in sequence, which shortens the loading time during image display and improves the image display speed. In the process of generating the base image, the original image can additionally be subjected to order reduction, which greatly reduces the memory occupied by the base image received by the receiving device. Furthermore, the communication quality and the display resolution of the receiving device can be taken into account in the image display process, so that the sending device can determine the corresponding number of image split packets and the base image to send, which saves transmission cost during image transmission and better fits the actual usage scenario of the receiving device.
Referring to fig. 4, fig. 4 is a schematic flowchart of another embodiment of an image processing method provided in the present application, and the method is applied to a receiving device. Specifically, the method comprises the following steps:
step S401: the method comprises the steps of obtaining a basic image and at least one image unpacking packet transmitted by a sending device, wherein the basic image and the at least one image unpacking packet are generated after image matting processing is carried out on an original image by the sending device.
Specifically, refer to step S301, which is not described herein again.
Step S402: and determining a first pixel point in the basic image and a second pixel point except the first pixel point.
According to some embodiments, the first pixel points are the pixel points whose pixel values were removed when the sending device generated the base image by matting the original image; the sending device usually updates the pixel value of each first pixel point to a target pixel value, for example 0.
Specifically, when the receiving device actually determines the first pixel points in the base image, one way is to take the pixel points whose pixel value equals the target pixel value (e.g., 0) as the first pixel points; another way is to traverse the pixel values of all pixel points in the base image, take the pixel value that occurs most frequently as the target pixel value, and then take the pixel points carrying that target pixel value as the first pixel points.
Specifically, after determining the first pixel points in the base image, the receiving device may take the pixel points in the base image other than the first pixel points as the second pixel points. It can be understood that the pixel values of the second pixel points were not processed during the image matting performed on the original image, that is, they are consistent with the pixel values of the corresponding pixel points in the original image.
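Both ways of locating the first pixel points described above can be sketched with NumPy as follows; the function names are illustrative only, and the second pixel points are simply the complement of the returned mask.

```python
import numpy as np

def first_pixel_mask_by_target(base_image: np.ndarray, target_value=0) -> np.ndarray:
    # Way 1: the target pixel value (e.g. 0) written by the sending device is known.
    return base_image == target_value

def first_pixel_mask_by_mode(base_image: np.ndarray) -> np.ndarray:
    # Way 2: traverse the pixel values, take the most frequent value as the
    # target pixel value, and mark the pixel points carrying it.
    values, counts = np.unique(base_image, return_counts=True)
    return base_image == values[np.argmax(counts)]

# The second pixel points are the complement of either mask:
# second_mask = ~first_pixel_mask_by_target(base_image)
```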
Step S403: and calculating a first reference pixel value corresponding to the first pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of the second pixel point.
In this embodiment of the present application, since the pixel values of the second pixel points were not processed during the image matting performed on the original image, when the base image is displayed according to the specifications (such as size and image content) of the original image, the first pixel points that were matted out and updated to the target pixel value need to be approximately restored, so as to reduce the pixel difference between the first pixel points and the second pixel points when the base image is displayed. The receiving device therefore uses a preset difference fitting algorithm to calculate, from the second pixel points left untouched by the matting, a first reference pixel value for each first pixel point, thereby approximately restoring the actual pixel values of the first pixel points.
The calculation of the first reference pixel value using a preset difference fitting algorithm is explained in detail below:
in the embodiment of the present application, the difference fitting algorithm uses the known second pixel points to establish an expression of a suitable interpolation function f(x); for an unknown first pixel point xi, the function value f(xi) can then be calculated from f(x), and f(xi) is taken as the approximate pixel value of that first pixel point.
For n different pixel points in the base image, based on this mathematical idea, a polynomial of degree (n-1) can be used to represent these points: $a_0 + a_1 x + a_2 x^2 + \dots + a_{n-1} x^{n-1}$.
One difference fitting method may be a difference fitting algorithm based on the lagrangian mathematical idea, as follows:
assuming that the coordinates x and the pixel values y of n pixel points are known, i.e. (x1, y1), (x2, y2), ..., (xn, yn), an expression of an interpolation function y representing the pixel points is constructed (y is equivalent to f(x) above),
then the values from the first pixel point to the n-th pixel point can be expressed as:
$y_1 = a_0 + a_1 x_1 + a_2 x_1^2 + \dots + a_{n-1} x_1^{n-1}$
$y_2 = a_0 + a_1 x_2 + a_2 x_2^2 + \dots + a_{n-1} x_2^{n-1}$
......
$y_n = a_0 + a_1 x_n + a_2 x_n^2 + \dots + a_{n-1} x_n^{n-1}$
Solving this system and simplifying yields the expression of the interpolation polynomial corresponding to the difference fitting algorithm, i.e. the Lagrange form:
$f(x) = \sum_{i=1}^{n} y_i \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}$
it can be understood that, when the first reference pixel value of a certain first pixel point is required, the known reference points (i.e., the second pixel points) only need to be substituted into the above expression in turn to obtain the pixel value of that first pixel point; by analogy, the first reference pixel value corresponding to each first pixel point can be obtained.
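The Lagrange form above can be evaluated directly; the following sketch estimates the value at a matted-out position from known second pixel points along one image row, with coordinates treated as one-dimensional for simplicity. It is a generic illustration, not code from the embodiments.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial built from the known
    points (xs[i], ys[i]) (second pixel points) at position x (a first pixel point)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Example: second pixel values 10, 14, 22 at columns 0, 2, 4 of a row; the
# estimate for the matted-out column 3 is lagrange_interpolate([0, 2, 4], [10, 14, 22], 3).
```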
One difference fitting approach may be a difference fitting algorithm based on newton's mathematical thought, as follows:
suppose there is an n×n image (n rows and n columns), and let f(xi, yi) denote the pixel value at a certain point of the matrix corresponding to the image; further, the coordinates x and the pixel values y of n pixel points are known, i.e. (x1, y1), (x2, y2), ..., (xn, yn). The relevant basic definitions of the difference fitting algorithm based on Newton's mathematical idea are explained below:
the 0th-order difference quotient of f(x) at $x_i$ is $f[x_i] = f(x_i)$;
the first-order difference quotient of f(x) at $x_i$ and $x_j$ is
$f[x_i, x_j] = \dfrac{f(x_i) - f(x_j)}{x_i - x_j}$;
the second-order difference quotient of f(x) at $x_i$, $x_j$, $x_k$ is
$f[x_i, x_j, x_k] = \dfrac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k}$;
...
and the n-th order difference quotient of f(x) can be expressed as
$f[x_1, x_2, \dots, x_{n+1}] = \dfrac{f[x_2, \dots, x_{n+1}] - f[x_1, \dots, x_n]}{x_{n+1} - x_1}$.
Further, the above definitions can be organized into the difference quotient table shown in fig. 5, from which the n-th order difference quotient of x can be determined. Based on the difference quotient table shown in fig. 5, the table can be filled column by column: each difference quotient is formed from the difference quotient in the same row of the previous column and the difference quotient in the adjacent row of the previous column.
Then, based on the above formulas, the derivation continuously iterates and substitutes to eliminate terms, keeping the interpolation approximation that does not involve the unknown point x and discarding the remainder that does, thereby producing a function formula that approximately expresses the first pixel points of the base image, in the Newton form:
$f(x) \approx f[x_1] + f[x_1, x_2](x - x_1) + f[x_1, x_2, x_3](x - x_1)(x - x_2) + \dots + f[x_1, \dots, x_n](x - x_1)\cdots(x - x_{n-1})$
where, when the pixel points are equally spaced with spacing h, the difference quotients can be computed from forward differences using h; h is a coefficient determined when the fitting representation of the first pixel points of the base image is performed, is chosen according to the actual application environment, and is not specifically limited here.
It can be understood that, when the first reference pixel value of a certain first pixel point is required, the known reference points (i.e., the second pixel points) only need to be substituted into the above expression in turn to obtain the pixel value of that first pixel point; by analogy, the first reference pixel value corresponding to each first pixel point can be obtained.
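The difference quotient table and the Newton form above can be computed as in the textbook construction below; this is a generic illustration (using the unequally spaced, divided-difference variant, so no step coefficient h appears), not code from the embodiments.

```python
def newton_coefficients(xs, ys):
    """Top row of the difference quotient table: f[x1], f[x1,x2], ..., f[x1,...,xn]."""
    n = len(xs)
    column = list(ys)                          # 0th-order difference quotients
    coeffs = [column[0]]
    for order in range(1, n):
        # Each new column is built from adjacent entries of the previous column.
        column = [(column[i + 1] - column[i]) / (xs[i + order] - xs[i])
                  for i in range(n - order)]
        coeffs.append(column[0])
    return coeffs

def newton_interpolate(xs, ys, x):
    """Evaluate the Newton interpolation polynomial at x (Horner-style)."""
    coeffs = newton_coefficients(xs, ys)
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result
```

For the same sample points as in the Lagrange example, newton_interpolate returns the same estimate, as expected from interpolation theory.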
Step S404: and updating the first pixel value of the first pixel point in the basic image into the first reference pixel value.
Specifically, after the first reference pixel values of the first pixel points are determined according to step S403 (in practical applications there is usually at least one first pixel point), the receiving device updates the first pixel value of each first pixel point in the base image to its first reference pixel value.
Step S405: and acquiring an image difference matrix, the reference points and a pixel selection algorithm of each image depacketizing packet in the at least one image depacketizing packet.
Specifically, after sequentially acquiring the at least one image split packet sent by the sending device, the receiving device performs packet analysis on each image split packet, in the order in which the packets were received and on the basis of the already displayed base image, to acquire the image information used to restore the displayed base image.
In practical application, after the sending device determines all first pixel points of a round of image matting in the original image, first pixel values corresponding to the first pixel points are obtained and filled in a preset matrix, and therefore the image difference matrix is generated.
The reference point can be understood as the reference matting point used by the sending device for one round of matting of the original image; in practical applications, all first pixel points of that round of matting can be obtained from the reference point together with the preset pixel selection algorithm. There may be one or more reference matting points; when there are multiple reference matting points, the sending device may determine at least one first pixel point from each of them in a serial or parallel manner. In the embodiment of the present application, the reference point is used to assist in determining, in the image matrix corresponding to the base image, the fourth pixel point associated with each matrix point of the image difference matrix.
In the embodiment of the present application, the receiving device determines, based on the reference point and the pixel selection algorithm, all points in the base image that were matted out by the sending device during image matting, that is, the fourth pixel points in the base image corresponding to the current image split packet.
Step S406: and determining the fourth pixel point associated with each matrix point of the image difference matrix in the image matrix corresponding to the basic image based on the reference point and the pixel selection algorithm.
The fourth pixel points are the pixel points in the image matrix whose positions the receiving device needs to fill with the reference pixel values of the corresponding matrix points of the image difference matrix.
The preset pixel selection algorithm is determined by the sending device according to the actual application environment. It may be a linear pixel selection algorithm, i.e. the sending device selects the matting pixel points according to a fixed linear mapping, for example by setting a pixel selection interval and choosing one first pixel point every n pixel points, that is, selecting the first pixel points at equal intervals. It may also be a non-linear pixel selection algorithm, i.e. the first pixel points are not selected according to a fixed linear mapping; a common choice is to select the first pixel points discretely.
For example, one pixel selection algorithm may be as follows: when the sending device performs image matting on the original image, it sets a total selection number x of first pixel points and, starting from the reference point (a, b) and following the selection rule of the algorithm, selects one first pixel point every two pixel points until the number of first pixel points indicated by the total selection number x has been selected.
When determining the fourth pixel points, the receiving device may correspondingly determine one fourth pixel point every two pixel points starting from the position of the reference point in the base image, and so on, until the fourth pixel points indicated by the total selection number x have been determined.
As another example, a pixel selection algorithm may not limit the total number of first pixel points: the sending device only takes the reference point (a, b) and the selection rule of the algorithm as references, for example selecting one first pixel point every two pixel points, and the selection process ends once all pixel points corresponding to the original image have been traversed according to the selection rule.
When determining the fourth pixel points, the receiving device may correspondingly determine one fourth pixel point every two pixel points starting from the position of the reference point in the base image, and so on, until all pixel points corresponding to the original image have been traversed according to the selection rule.
It should be noted that there are various pixel selection algorithms, which are determined according to the actual application environment, and it can be understood that the process of the receiving device selecting or determining the fourth pixel point in the base image is also the inverse process of the transmitting device selecting or determining the first pixel point in the original image.
Step S407: and acquiring a reference pixel value of the matrix point, and updating a fourth pixel value of the fourth pixel point to the reference pixel value in an image matrix corresponding to the basic image.
Specifically, the receiving device obtains the reference pixel value corresponding to each matrix point in the image difference matrix, and then, at the fourth pixel point corresponding to that matrix point in the image matrix corresponding to the base image, updates the fourth pixel value to the reference pixel value. This completes the process of sequentially displaying the image information corresponding to the image split packets on the base image.
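Steps S405 to S407 can be sketched as follows: the receiving device re-derives the matted positions from the reference point and the pixel selection rule, then writes each reference pixel value of the image difference matrix back to the associated fourth pixel point. The equal-interval, row-major selection rule used here is an assumption for illustration, standing in for whichever pixel selection algorithm the sending device actually used.

```python
import numpy as np

def select_points(shape, reference_point, step, count):
    """Re-derive the matted positions: starting from the reference point, take
    every `step`-th position in row-major order, `count` positions in total."""
    rows, cols = shape
    index = reference_point[0] * cols + reference_point[1]
    points = []
    while len(points) < count and index < rows * cols:
        points.append((index // cols, index % cols))
        index += step
    return points

def apply_split_packet(base_image, difference_matrix, reference_point, step):
    """Write each reference pixel value of the image difference matrix back to
    the fourth pixel point it is associated with in the image matrix."""
    restored = np.array(base_image, copy=True)
    values = np.asarray(difference_matrix).ravel()
    points = select_points(restored.shape, reference_point, step, values.size)
    for (r, c), value in zip(points, values):
        restored[r, c] = value               # fourth pixel value -> reference pixel value
    return restored
```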
In the embodiment of the application, a receiving device acquires a base image and at least one image split packet transmitted by a sending device, the base image and the at least one image split packet being generated after the sending device performs image matting on an original image; the receiving device then displays the base image and displays the image information corresponding to the image split packets on the base image. Because the base image obtained by matting and splitting the original image occupies little memory, the receiving device can display the base image corresponding to the original image as soon as the base image is first received, and then display the image information of the received image split packets on the base image in sequence, which shortens the loading time during image display and improves the image display speed. In the process of generating the base image, the original image can additionally be subjected to order reduction, which greatly reduces the memory occupied by the base image received by the receiving device. Furthermore, the communication quality and the display resolution of the receiving device can be taken into account in the image display process, so that the sending device can determine the corresponding number of image split packets and the base image to send, which saves transmission cost during image transmission and better fits the actual usage scenario of the receiving device.
Referring to fig. 6, fig. 6 is a schematic flowchart of another embodiment of an image processing method provided in the present application, and the method is applied to a receiving device. Specifically, the method comprises the following steps:
step S501: the method comprises the steps of obtaining a basic image and at least one image unpacking packet transmitted by a sending device, wherein the basic image and the at least one image unpacking packet are generated after image matting processing is carried out on an original image by the sending device.
Specifically, refer to step S301, which is not described herein again.
Step S502: acquiring the image coding information carried by the basic image, and determining the position mapping relation between at least one second pixel point in the basic image and the original image and the image size of the original image based on the image coding information.
In this embodiment, the base image is an image obtained by performing image reduction processing on an original image, and therefore the receiving device needs to perform image restoration on the base image based on image coding information carried by the base image.
The image coding information may include the position mapping relationship between each pixel point in the base image (i.e., the second pixel points in the embodiments of the present application) and the original image, and the image size of the original image.
Illustratively, when the sending device performs image reduction processing on the original image, assume that the image matrix corresponding to the original image is an M×N matrix and that the image difference matrix formed by the pixel values of the at least one matted pixel point is an m×n matrix; the sending device then determines that the reduction order of the original image is (M-m, N-n). All pixel points that have already been matted out of the original image are filtered out based on this reduction order, yielding the reduced base image, i.e. the base image excluding the image split packets. Meanwhile, after reducing the order of each target image (i.e., the image obtained after each round of matting the original image), the sending device obtains the position mapping relationship between each pixel point in the target image and the original image as well as the image size of the original image, and then generates the image coding information based on the position mapping relationship and the image size.
Therefore, the receiving device can extract the position mapping relation between at least one second pixel point in the basic image and the original image contained in the image coding information and the image size of the original image only by performing information analysis on the image coding information.
Step S503: and performing image restoration display on at least one second pixel point in the basic image according to the image size and the position mapping relation.
Specifically, the receiving device obtains the image size and the position mapping relationship, then determines at least one second pixel point in the basic image, and performs image restoration display on the at least one second pixel point. The method comprises the following specific steps:
1. constructing an initial matrix corresponding to the original image according to the image size;
specifically, the receiving device first constructs an initial matrix corresponding to the original image based on the image size. For example, if the image size is a×b and each image unit corresponds to x pixel points, the receiving device can construct a corresponding initial matrix whose specification is (a·x)×(b·x). During construction, the matrix points of the initial matrix are initialized, for example by setting the pixel value of each matrix point to 0.
2. Adding at least one second pixel point in the basic image to the initial matrix based on the position mapping relation;
specifically, the receiving device adds the at least one second pixel point in the base image to the initial matrix based on the position mapping relationship between the at least one second pixel point in the base image and the original image, that is, based on the position of each second pixel point in the initial matrix of the original image. In a specific implementation, the pixel value of the matrix point corresponding to a second pixel point in the initial matrix is set to the second pixel value, and that matrix point is treated as the second pixel point.
3. Determining at least one third pixel point except the second pixel point in the initial matrix;
specifically, in this embodiment, since the base image is an image after the order reduction, that is, after all the second pixel points in the base image are restored to the initial matrix, there may be a third pixel point whose pixel value of the matrix point in the initial matrix is the initial pixel value (e.g., 0), and the receiving device may determine at least one third pixel point in the initial matrix except for the second pixel point.
4. Calculating a second reference pixel value corresponding to the third pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of at least one second pixel point in the basic image;
in this embodiment of the present application, since some pixel values were removed from the original image during the image matting, when the original image is to be displayed according to its specifications (such as size and image content) on the basis of the base image, the third pixel points in the initial matrix of the original image corresponding to the base image need to be approximately restored, so as to reduce the pixel differences between pixel points when the original image corresponding to the base image is displayed. The receiving device therefore uses a preset difference fitting algorithm to calculate, from the second pixel values of the at least one second pixel point left untouched by the matting, a second reference pixel value for each third pixel point, thereby approximately restoring the actual pixel values of the third pixel points.
The step of calculating the second reference pixel value corresponding to the third pixel point by using a preset difference fitting algorithm may refer to a related definition of "calculating the first reference pixel value corresponding to the first pixel point by using a preset difference fitting algorithm" in step S403, which is not repeated herein.
5. And updating the third pixel value of the third pixel point to be the second reference pixel value.
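A sketch of sub-steps 1 to 5 above: build an initial matrix of the original image's size, place the second pixel points according to the position mapping relationship, and fill the remaining third pixel points with interpolated estimates. One-dimensional interpolation along each row (NumPy's np.interp) stands in here for the preset difference fitting algorithm and is an assumption of this example.

```python
import numpy as np

def restore_base_image(image_size, position_map, second_values):
    """image_size: (rows, cols) of the original image, from the image coding information.
    position_map: (row, col) positions of the second pixel points in the original image.
    second_values: the pixel values of those second pixel points."""
    initial = np.zeros(image_size, dtype=float)          # step 1: initial matrix, all zeros
    known = np.zeros(image_size, dtype=bool)
    for (r, c), value in zip(position_map, second_values):
        initial[r, c] = value                            # step 2: place the second pixel points
        known[r, c] = True
    cols = np.arange(image_size[1])
    for r in range(image_size[0]):                       # steps 3-5: estimate the third pixel points
        if known[r].any():
            initial[r, ~known[r]] = np.interp(cols[~known[r]], cols[known[r]], initial[r, known[r]])
    return initial
```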
Step S504: and acquiring an image difference matrix, the reference points and a pixel selection algorithm of each image depacketizing packet in the at least one image depacketizing packet.
Specifically, refer to step S405, which is not described herein again.
Step S505: and updating the third pixel value of a third pixel point in the initial matrix based on the reference point and the pixel selection algorithm to obtain the original image information corresponding to the basic image.
Specifically, when the base image is received and displayed, the third pixel values of the third pixel points are approximate values determined by the difference fitting algorithm. As the image split packets transmitted by the sending device are acquired in sequence, the reference pixel value at each matrix point of the image difference matrix carried in an image split packet is the actual third pixel value at the corresponding position of the initial matrix of the original image. Therefore, after receiving an image split packet from the sending device, the receiving device performs image restoration using the reference pixel values at the matrix points of the image difference matrix included in that packet. The specific process is as follows:
1. determining the third pixel points associated with each matrix point of the image difference matrix in the initial matrix based on the reference point and the pixel selection algorithm.
The third pixel points here are the pixel points in the initial matrix whose positions the receiving device needs to fill with the reference pixel values of the corresponding matrix points of the image difference matrix.
The preset pixel selection algorithm is determined by the sending device according to the actual application environment. It may be a linear pixel selection algorithm, i.e. the sending device selects the matting pixel points according to a fixed linear mapping, for example by setting a pixel selection interval and choosing one first pixel point every n pixel points, that is, selecting the first pixel points at equal intervals. In this embodiment, the first pixel points are the points determined by matting the original image during image coding; the pixel values of the matrix points in the image difference matrix are generated from the pixel values of the first pixel points, and those pixel values need to be filled into the third pixel points of the initial matrix.
For example, one pixel selection algorithm may be as follows: when the sending device performs image matting on the original image, it sets a total selection number x of first pixel points and, starting from the reference point (a, b) and following the selection rule of the algorithm, selects one first pixel point every two pixel points until the number of first pixel points indicated by the total selection number x has been selected.
When determining the third pixel points, the receiving device may correspondingly determine one third pixel point every two pixel points starting from the position of the reference point in the base image, and so on, until the third pixel points indicated by the total selection number x have been determined.
As another example, a pixel selection algorithm may not limit the total number of first pixel points: the sending device only takes the reference point (a, b) and the selection rule of the algorithm as references, for example selecting one first pixel point every two pixel points, and the selection process ends once all pixel points corresponding to the original image have been traversed according to the selection rule.
When determining the third pixel points, the receiving device may correspondingly determine one third pixel point every two pixel points starting from the position of the reference point in the base image, and so on, until all pixel points corresponding to the original image have been traversed according to the selection rule.
It should be noted that there are various pixel selection algorithms, determined by the actual application environment. It can be understood that the process in which the receiving device selects or determines the third pixel points in the base image is the inverse of the process in which the sending device selects or determines the first pixel points in the original image, and that the number of first pixel points selected when an image split packet is generated equals the number of third pixel points selected when the base image is restored based on that image split packet.
It is to be understood that, when a plurality of image split packets are received, the receiving device needs to perform the step of determining, in the initial matrix, the third pixel points associated with each matrix point of the image difference matrix based on the reference point and the pixel selection algorithm multiple times, until the last image split packet sent by the sending device has been acquired.
2. And acquiring a reference pixel value of the matrix point, and updating the third pixel value of the third pixel point to the reference pixel value to obtain original image information of the basic image.
Specifically, the receiving device obtains the reference pixel value corresponding to each matrix point in the image difference matrix, and then, at the third pixel point corresponding to that matrix point in the initial matrix corresponding to the base image, updates the third pixel value to the reference pixel value. It can be understood that the original image information is the third pixel values at the third pixel points.
Step S506: displaying the original image information on the base image.
According to some embodiments, the receiving device may sequentially load, onto the base image, the pixel values of the third pixel points determined from each image split packet, so as to render the display of the base image based on the original image information (the third pixel values at the third pixel points).
In the embodiment of the application, a receiving device acquires a base image and at least one image split packet transmitted by a sending device, the base image and the at least one image split packet being generated after the sending device performs image matting on an original image; the receiving device then displays the base image and sequentially displays the image information corresponding to the image split packets on the base image. Because the base image obtained by matting and splitting the original image occupies little memory, the receiving device can display the base image of the original image as soon as the base image is first received, and then display the image information of the received image split packets on the base image in sequence, which shortens the loading time during image display and increases the image display speed. In the process of generating the base image, the original image can additionally be subjected to order reduction, which greatly reduces the memory occupied by the base image received by the receiving device. Furthermore, the communication quality and the display resolution of the receiving device can be taken into account in the image display process, so that the sending device can determine the corresponding number of image split packets and the base image to send, which saves transmission cost during image transmission and better fits the actual usage scenario of the receiving device.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application is shown. The image processing apparatus may be implemented as all or part of a device by software, hardware, or a combination of both. The apparatus 1 includes an original image matting module 11 and an image data sending module 12.
An original image matting module 11, configured to perform image matting on an original image to obtain a base image and at least one image unpacking corresponding to the original image;
an image data sending module 12, configured to transmit the base image and the at least one image splitting packet to a receiving device, where the receiving device is configured to display the base image and display image information corresponding to the image splitting packet on the base image.
Optionally, as shown in fig. 8, the original image matting module 11 includes:
a matting position determining unit 111 for determining a first matting position in the original image, and generating an image depacketizing packet and a target image other than the image depacketizing packet based on the first matting position;
the matting position determining unit 111 is further configured to determine a second matting position in the target image when the target image does not satisfy the stop matting condition, take the second matting position as the first matting position, and perform the step of generating an image depacketizing packet and a target image other than the image depacketizing packet based on the first matting position;
an image data generating unit 112, configured to use the target image as a base image when the target image satisfies a stop matting condition, to obtain at least one image split packet and a base image other than the at least one image split packet.
Optionally, as shown in fig. 9, the matting position determining unit 111 includes:
a pixel point selection subunit 1111, configured to determine a reference point in the original image, and determine at least one first pixel point in the original image based on a preset pixel selection algorithm and the reference point;
an image split packet generating subunit 1112, configured to obtain a first pixel value of the at least one first pixel point, and generate an image split packet;
a target image generating sub-unit 1113, configured to generate a target image other than the image subpacket based on the at least one first pixel point and the original image.
Optionally, the image splitting packet generating subunit 1112 is specifically configured to:
acquiring a first pixel value of the at least one first pixel point, and determining position information of each first pixel point in a preset matrix;
adding a first pixel value of the first pixel point to the preset matrix based on the position information to generate an image difference matrix;
and generating an image unpacking packet based on the image difference matrix, the reference point and the pixel selection algorithm.
Optionally, the target image generating subunit 1113 is specifically configured to:
and in the original image, setting the first pixel values of all the first pixel points as target pixel values to obtain a target image except the image subpackage.
Optionally, the target image generating subunit 1113 is specifically configured to:
in the original image, setting the first pixel values of all the first pixel points as target pixel values, and performing image reduction processing on the original image to obtain a target image except the image split packet.
Optionally, the image splitting packet generating subunit 1112 is specifically configured to:
acquiring a position mapping relation between each pixel point in the target image and the original image and an image size of the original image;
generating image coding information corresponding to the target image based on the position mapping relation and the image size;
the image data sending module 12 is specifically configured to:
and when the target image is the basic image, sequentially transmitting the basic image and the at least one image subpackage to a receiving device, wherein the basic image carries the image coding information.
Optionally, the image data sending module 12 is specifically configured to:
acquiring the communication quality of the receiving equipment and the display resolution of the receiving equipment;
determining a reference number of target image subpackets among the at least one of the image subpackets based on the communication quality and the display resolution;
and sequentially transmitting the basic images and the reference number of target image sub-packets to a receiving device.
It should be noted that, when the image processing apparatus provided in the foregoing embodiment executes the image processing method, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In this embodiment of the present application, a sending device may perform image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and then transmit the base image and the at least one image split packet to a receiving device, the receiving device being configured to display the base image and sequentially display the image information corresponding to the image split packets on the base image. By splitting the original image into a base image that occupies less memory than the original image and at least one image split packet, and sending them to the receiving device in sequence, the receiving device can display the base image of the original image as soon as the base image is first received and then display the image information of the received image split packets on the base image, which shortens the loading time during image display and improves the image display speed.
Referring to fig. 10, a schematic structural diagram of another image processing apparatus according to an exemplary embodiment of the present application is shown. The image processing apparatus may be implemented as all or a part of an apparatus by software, hardware, or a combination of both. The apparatus 2 includes an image data acquisition module 21 and an image data display module 22.
The image data acquisition module 21 is configured to acquire a base image and at least one image unpacking packet transmitted by a sending device, where the base image and the at least one image unpacking packet are generated after image matting processing is performed on an original image by the sending device;
and an image data display module 22, configured to display the basic image, and display image information corresponding to the image split packet on the basic image.
Optionally, as shown in fig. 11, the image data display module 22 includes:
an image information determining unit 221, configured to obtain the image coding information carried by the base image, and determine, based on the image coding information, a position mapping relationship between at least one second pixel point in the base image and an original image, and an image size of the original image;
and the image data display unit 222 is configured to perform image restoration display on at least one second pixel point in the base image according to the image size and the position mapping relationship.
Optionally, the image data display unit 222 is specifically configured to:
constructing an initial matrix corresponding to the original image according to the image size;
adding at least one second pixel point in the basic image to the initial matrix based on the position mapping relation;
determining at least one third pixel point except the second pixel point in the initial matrix;
calculating a second reference pixel value corresponding to the third pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of at least one second pixel point in the basic image;
and updating the third pixel value of the third pixel point to be the second reference pixel value.
Optionally, the image data display unit 222 is specifically configured to:
acquiring an image difference matrix, the reference points and a pixel selection algorithm of each image subpackage in the at least one image subpackage;
determining fourth pixel points associated with each matrix point of the image difference matrix in an image matrix corresponding to the basic image based on the reference point and the pixel selection algorithm;
and acquiring a reference pixel value of the matrix point, and updating a fourth pixel value of the fourth pixel point to the reference pixel value in an image matrix corresponding to the basic image.
Optionally, the image data display unit 222 is specifically configured to:
acquiring an image difference matrix, the reference points and a pixel selection algorithm of each image subpackage in the at least one image subpackage;
updating the third pixel value of a third pixel point in the initial matrix based on the reference point and the pixel selection algorithm to obtain original image information corresponding to the basic image;
displaying the original image information on the base image.
Optionally, the image data display unit 222 is specifically configured to:
determining the third pixel points associated with each matrix point of the image difference matrix in the initial matrix based on the reference points and the matrix point determination algorithm;
and acquiring a reference pixel value of the matrix point, and updating the third pixel value of the third pixel point to the reference pixel value.
Optionally, the image data obtaining module 21 is specifically configured to:
acquiring the current communication quality and the display resolution of the receiving equipment, and sending the communication quality and the display resolution to the sending equipment;
the image data display module 22 is specifically configured to:
acquiring a base image transmitted by a transmitting device, and acquiring a reference number of image depacketization packets determined by the transmitting device based on the communication quality and the display resolution.
It should be noted that, when the image processing apparatus provided in the foregoing embodiment executes the image processing method, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, a receiving device acquires a base image and at least one image split packet transmitted by a sending device, the base image and the at least one image split packet being generated after the sending device performs image matting on an original image; the receiving device then displays the base image and sequentially displays the image information corresponding to the image split packets on the base image. Because the base image obtained by matting and splitting the original image occupies little memory, the receiving device can display the base image of the original image as soon as the base image is first received, and then display the image information of the received image split packets on the base image, which shortens the loading time during image display and improves the image display speed. In the process of generating the base image, the original image can additionally be subjected to order reduction, which greatly reduces the memory occupied by the base image received by the receiving device. Furthermore, the communication quality and the display resolution of the receiving device can be taken into account in the image display process, so that the sending device can determine the corresponding number of image split packets and the base image to send, which saves transmission cost during image transmission and better fits the actual usage scenario of the receiving device.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the image processing method according to the embodiment shown in fig. 1 to 6, and a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 6, which is not described herein again.
The present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded by the processor and executes the image processing method according to the embodiment shown in fig. 1 to 6, where a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 6, and is not described herein again.
Please refer to fig. 12, which is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 12, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various parts of the entire electronic device 1000 using various interfaces and lines, and performs various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling data stored in the memory 1005. Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communications. It is understood that the modem may also not be integrated into the processor 1001 but implemented by a separate chip.
The Memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; and the data storage area may store the data and the like referred to in the above method embodiments. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 12, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a data transmission control application program.
In the electronic device 1000 shown in fig. 12, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the processor 1001 may be configured to invoke an image processing application stored in the memory 1005 and specifically perform the following operations:
carrying out image matting on an original image to obtain a base image corresponding to the original image and at least one image unpacking;
and transmitting the base image and the at least one image splitting packet to a receiving device, wherein the receiving device is used for displaying the base image and displaying the image information corresponding to the image splitting packet on the base image.
In one embodiment, when the processor 1001 performs the image matting on the original image to obtain a base image corresponding to the original image and at least one image split packet, the following operations are specifically performed:
determining a first matting position in the original image, and generating an image split packet and a target image excluding the image split packet based on the first matting position;
when the target image does not meet the matting stop condition, determining a second matting position in the target image, taking the second matting position as the first matting position, and executing the step of generating an image split packet and a target image excluding the image split packet based on the first matting position;
and when the target image meets the matting stop condition, taking the target image as the base image to obtain at least one image split packet and a base image excluding the at least one image split packet.
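The loop above can be pictured with a short sketch. The following Python/NumPy fragment is only an illustration under assumptions of our own: the matting position is taken as a diagonal offset, the peeled pixels form a regular grid with a fixed stride, the target pixel value is 0, and the matting stop condition is a fixed packet count; none of these specifics are prescribed here, and the function and key names (matte_split, difference_matrix, pixel_selection) are hypothetical.

```python
import numpy as np

def matte_split(original, num_packets=3, stride=2):
    """Peel split packets off `original`; the residual is returned as the base image."""
    target = original.astype(np.float32)
    offsets = [(0, 0), (1, 1), (0, 1), (1, 0)]          # assumed sequence of matting positions
    packets = []
    for k in range(min(num_packets, len(offsets))):     # assumed matting stop condition
        r0, c0 = offsets[k]
        rows = np.arange(r0, target.shape[0], stride)
        cols = np.arange(c0, target.shape[1], stride)
        rr, cc = np.meshgrid(rows, cols, indexing="ij")
        packets.append({
            "difference_matrix": target[rr, cc].copy(),  # the peeled first pixel values
            "reference_point": (r0, c0),
            "pixel_selection": {"name": "grid-stride", "stride": stride},
        })
        target[rr, cc] = 0.0                             # blank the peeled pixels
    return target, packets                               # base image, split packets

# Example: base, pkts = matte_split(np.random.rand(16, 16) * 255, num_packets=2)
```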
In one embodiment, when the processor 1001 determines the first matting position in the original image and generates an image split packet and a target image excluding the image split packet based on the first matting position, the following operations are specifically performed:
determining a reference point in the original image, and determining at least one first pixel point in the original image based on a preset pixel selection algorithm and the reference point;
acquiring a first pixel value of the at least one first pixel point, and generating an image split packet;
and generating a target image excluding the image split packet based on the at least one first pixel point and the original image.
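The pixel selection algorithm itself is left open here; any deterministic rule shared by the sending device and the receiving device will do. As a purely illustrative assumption, the sketch below takes every stride-th pixel along both axes starting from the reference point (the name select_first_pixels and the stride parameter are ours):

```python
import numpy as np

def select_first_pixels(image_shape, reference_point, stride=2):
    """Index grids of the 'first pixel points' derived from the reference point."""
    r0, c0 = reference_point
    rows = np.arange(r0, image_shape[0], stride)
    cols = np.arange(c0, image_shape[1], stride)
    return np.meshgrid(rows, cols, indexing="ij")        # same layout as the preset matrix

# Example: rr, cc = select_first_pixels((16, 16), reference_point=(1, 1))
```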
In an embodiment, when the processor 1001 executes the acquiring of the first pixel value of the at least one first pixel point and the generating of the image split packet, the following operations are specifically executed:
acquiring a first pixel value of the at least one first pixel point, and determining position information of each first pixel point in a preset matrix;
adding the first pixel value of each first pixel point to the preset matrix based on the position information to generate an image difference matrix;
and generating an image split packet based on the image difference matrix, the reference point, and the pixel selection algorithm.
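In other words, a split packet carries just enough information for the receiver to put the peeled values back: the difference matrix of values, the reference point, and an identifier of the selection rule. A minimal sketch of such a packet under the same grid-stride assumption as above (the dictionary keys are hypothetical):

```python
import numpy as np

def build_split_packet(image, reference_point, stride=2):
    """Collect the selected pixel values into an image difference matrix and bundle them
    with the reference point and an identifier of the pixel selection rule."""
    r0, c0 = reference_point
    rows = np.arange(r0, image.shape[0], stride)
    cols = np.arange(c0, image.shape[1], stride)
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    difference_matrix = image[rr, cc].copy()             # entry (i, j) holds the (i, j)-th
                                                         # selected first pixel value
    return {
        "difference_matrix": difference_matrix,
        "reference_point": reference_point,
        "pixel_selection": {"name": "grid-stride", "stride": stride},
    }
```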
In an embodiment, when the processor 1001 executes the generating of the target image excluding the image split packet based on the at least one first pixel point and the original image, the following operations are specifically executed:
and in the original image, setting the first pixel values of all the first pixel points to a target pixel value to obtain a target image excluding the image split packet.
In one embodiment, when executing the image processing method, the processor 1001 specifically performs the following operations:
acquiring a position mapping relation between each pixel point in the target image and the original image and an image size of the original image;
generating image coding information corresponding to the target image based on the position mapping relation and the image size;
the transmitting the base image and the at least one image split packet to a receiving device comprises:
and when the target image is the base image, sequentially transmitting the base image and the at least one image split packet to the receiving device, wherein the base image carries the image coding information.
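Where the target image has been order-reduced (for example, downscaled), the image coding information is what lets the receiver undo the reduction. A sketch of what such coding information might contain, assuming a uniform integer downscale; a per-pixel mapping table would serve the same purpose (the field names are ours):

```python
def make_image_coding_info(original_size, scale=2):
    """original_size: (height, width) of the original image; scale: assumed reduction factor."""
    return {
        "original_size": original_size,
        "position_mapping": {"type": "uniform-downscale", "scale": scale},
        # i.e. pixel (r, c) of the base image corresponds to pixel (r * scale, c * scale)
        # of the original image
    }
```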
In one embodiment, when the processor 1001 performs the transmitting of the base image and the at least one image split packet to the receiving device, the following operations are specifically performed:
acquiring the communication quality of the receiving device and the display resolution of the receiving device;
determining a reference number of target image split packets among the at least one image split packet based on the communication quality and the display resolution;
and transmitting the base image and the reference number of target image split packets to a receiving device.
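The mapping from communication quality and display resolution to the reference number is not fixed by this embodiment; the toy policy below is only one plausible choice, assuming the communication quality has been normalised to a score in [0, 1] (the function name and thresholds are ours):

```python
def reference_packet_count(total_packets, comm_quality, display_resolution):
    """Pick how many target image split packets to transmit alongside the base image."""
    width, height = display_resolution
    if comm_quality < 0.3 or width * height <= 640 * 480:
        return 0                                # poor link or small screen: base image only
    if comm_quality < 0.7:
        return max(1, total_packets // 2)       # medium link: part of the detail packets
    return total_packets                        # good link and large screen: everything

# Example: reference_packet_count(4, 0.8, (1920, 1080)) -> 4
```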
In this embodiment of the present application, a sending device may perform image matting on an original image to obtain a base image and at least one image split packet corresponding to the original image, and then transmit the base image and the at least one image split packet to a receiving device, where the receiving device is configured to display the base image and display the image information corresponding to the image split packets on the base image. By matting and splitting the original image into a base image that occupies less memory than the original image and at least one image split packet, and sending the base image and the split packets to the receiving device in sequence, the receiving device can display the base image as soon as it first arrives and then display the image information of each received image split packet on the base image in turn, which shortens the loading time during image display and increases the image display speed.
Referring to fig. 13, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
The processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 but be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the various method embodiments described below, and the like; the operating system may be an Android system (including systems developed in depth on the basis of the Android system), an iOS system developed by Apple (including systems developed in depth on the basis of the iOS system), or another system. The data storage area may also store data created by the electronic device during use, such as a phone book, audio and video data, chat log data, and the like.
Referring to fig. 14, the memory 120 may be divided into an operating system space, where an operating system is run, and a user space, where native and third-party applications are run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking an operating system as an Android system as an example, programs and data stored in the memory 120 are as shown in fig. 15, and a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides the underlying drivers for the various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, the Webkit library provides browser kernel support, and the like. Also provided in the system runtime library layer 340 is the Android runtime library (Android runtime), which mainly provides some core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides various APIs that may be used in building applications, and developers may build their own applications by using these APIs, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management. At least one application program runs in the application layer 380; these may be native applications carried by the operating system, such as a contacts program, a short message program, a clock program, a camera application, and the like, or third-party applications developed by third-party developers, such as a game application, an instant messaging program, a photo beautification program, an image processing program, and the like.
Taking an operating system as an iOS system as an example, programs and data stored in the memory 120 are shown in fig. 16, and the iOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer), and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks, which provide functionality closer to the hardware for use by the program frameworks located in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audiovisual-related interfaces for applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, the audio/video transmission technology wireless playback (AirPlay) interface, and the like. The touchable layer 480 provides various common interface-related frameworks for application development, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a UIKit framework, a map framework, and so forth, and the touchable layer 480 is responsible for the user's touch interaction operations on the electronic device.
In the framework illustrated in FIG. 16, the frameworks involved in most applications include, but are not limited to: the Foundation framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The Foundation framework provides many basic object classes and data types and the most basic system services for all applications, and is independent of the UI. The classes provided by the UIKit framework form a basic UI class library for creating touch-based user interfaces; iOS applications can provide their UIs based on the UIKit framework, so it provides the infrastructure for applications to build user interfaces, draw, handle user interaction events, respond to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application program and the operating system in the iOS system, reference may be made to the Android system; details are not repeated herein.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined into a touch display screen, which receives a user's touch operations on or near it with any suitable object such as a finger or a stylus, and displays the user interfaces of the various applications. The touch display screen is typically provided on the front panel of the electronic device. The touch display screen may be designed as a full screen, a curved screen, or a shaped screen, or as a combination of a full screen and a curved screen or of a shaped screen and a curved screen, which is not limited in the embodiments of the present application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiments of the present application, the execution subject of each step may be the electronic device described above. Optionally, the execution subject of each step is the operating system of the electronic device. The operating system may be an Android system, an iOS system, or another operating system, which is not limited in the embodiments of the present application.
The electronic device of the embodiments of the application may also be provided with a display device, which may be any of various devices capable of realizing a display function, for example: a cathode ray tube display (CRT), a light-emitting diode display (LED), an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. A user may use the display device on the electronic device to view displayed text, images, video, and other information. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
In the electronic device shown in fig. 13, which may be a terminal, the processor 110 may be configured to call the image processing application stored in the memory 120 and specifically perform the following operations:
acquiring a base image and at least one image split packet transmitted by a sending device, wherein the base image and the at least one image split packet are generated after the sending device performs image matting on an original image;
and displaying the base image, and displaying the image information corresponding to the image split packet on the base image.
In an embodiment, the processor 110 specifically performs the following operations when performing the displaying of the base image:
determining a first pixel point in the base image and a second pixel point other than the first pixel point;
calculating a first reference pixel value corresponding to the first pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of the second pixel point;
and updating the first pixel value of the first pixel point in the base image to the first reference pixel value.
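The fitting step can be pictured as filling each blanked first pixel point from the surrounding known pixels. The sketch below uses a simple average of the valid 4-neighbours as a stand-in for the preset difference value fitting algorithm, and assumes the mask of first pixel points is already known on the receiving side (for example, re-derived from the reference point and selection rule, or from the target pixel value); the function name fill_missing is ours:

```python
import numpy as np

def fill_missing(base, missing_mask):
    """base: 2-D array; missing_mask: True at the first pixel points that were blanked."""
    filled = base.astype(np.float32)
    valid = (~missing_mask).astype(np.float32)
    padded_v = np.pad(filled * valid, 1)                 # known values, zero elsewhere
    padded_w = np.pad(valid, 1)                          # 1 where a value is known
    nbr_sum = (padded_v[:-2, 1:-1] + padded_v[2:, 1:-1] +
               padded_v[1:-1, :-2] + padded_v[1:-1, 2:])
    nbr_cnt = (padded_w[:-2, 1:-1] + padded_w[2:, 1:-1] +
               padded_w[1:-1, :-2] + padded_w[1:-1, 2:])
    estimate = np.divide(nbr_sum, nbr_cnt,
                         out=np.zeros_like(nbr_sum), where=nbr_cnt > 0)
    filled[missing_mask] = estimate[missing_mask]        # the first reference pixel values
    return filled
```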
In an embodiment, the processor 110 specifically performs the following operations when performing the displaying of the base image:
acquiring the image coding information carried by the base image, and determining, based on the image coding information, the position mapping relation between at least one second pixel point in the base image and the original image and the image size of the original image;
and performing image restoration display on the at least one second pixel point in the base image according to the image size and the position mapping relation.
In an embodiment, when the processor 110 performs the image restoration display on at least one second pixel point in the base image according to the image size and the position mapping relationship, the following operations are specifically performed:
constructing an initial matrix corresponding to the original image according to the image size;
adding at least one second pixel point in the base image to the initial matrix based on the position mapping relation;
determining at least one third pixel point other than the second pixel point in the initial matrix;
calculating a second reference pixel value corresponding to the third pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of at least one second pixel point in the base image;
and updating the third pixel value of the third pixel point to be the second reference pixel value.
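Under the uniform-downscale assumption used in the coding-information sketch earlier, these four steps reduce to: allocate a matrix of the original size, scatter the base pixels into it at their mapped positions, and fill the remaining third pixel points. The fragment below fills them by replicating the nearest known value, which is only a crude stand-in for the fitting step; restore_from_base and the field names are ours:

```python
import numpy as np

def restore_from_base(base, coding_info):
    height, width = coding_info["original_size"]
    scale = coding_info["position_mapping"]["scale"]
    initial = np.zeros((height, width), dtype=np.float32)        # the initial matrix
    rows = np.arange(base.shape[0]) * scale                      # mapped positions of the
    cols = np.arange(base.shape[1]) * scale                      # second pixel points
    initial[np.ix_(rows, cols)] = base
    known = np.zeros((height, width), dtype=bool)
    known[np.ix_(rows, cols)] = True
    # third pixel points: block-replicate the nearest known value
    approx = np.repeat(np.repeat(base, scale, axis=0), scale, axis=1)[:height, :width]
    initial[~known] = approx[~known]
    return initial

# Example (with base produced as base = original[::scale, ::scale]):
# restored = restore_from_base(base, {"original_size": original.shape,
#                                     "position_mapping": {"scale": 2}})
```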
In an embodiment, when the processor 110 executes the displaying of the image information corresponding to the image split packet on the base image, the following steps are specifically executed:
acquiring the image difference matrix, the reference point, and the pixel selection algorithm of each image split packet in the at least one image split packet;
determining, in an image matrix corresponding to the base image, fourth pixel points associated with each matrix point of the image difference matrix based on the reference point and the pixel selection algorithm;
and acquiring a reference pixel value of the matrix point, and updating, in the image matrix corresponding to the base image, a fourth pixel value of the fourth pixel point to the reference pixel value.
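Mirroring the sender-side packet sketch above, applying one split packet amounts to re-deriving the selected positions from the reference point and selection rule and overwriting them with the values stored in the difference matrix; repeating this for every received packet progressively restores the original detail. A minimal illustration under the same grid-stride assumption:

```python
import numpy as np

def apply_split_packet(image_matrix, packet):
    """Overwrite the fourth pixel points of `image_matrix` with the packet's values."""
    r0, c0 = packet["reference_point"]
    stride = packet["pixel_selection"]["stride"]
    rows = np.arange(r0, image_matrix.shape[0], stride)
    cols = np.arange(c0, image_matrix.shape[1], stride)
    image_matrix[np.ix_(rows, cols)] = packet["difference_matrix"]
    return image_matrix
```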
In an embodiment, when the processor 110 executes the displaying of the image information corresponding to the image split packet on the base image, the following steps are specifically executed:
acquiring the image difference matrix, the reference point, and the pixel selection algorithm of each image split packet in the at least one image split packet;
updating the third pixel value of a third pixel point in the initial matrix based on the reference point and the pixel selection algorithm to obtain original image information corresponding to the base image;
and sequentially displaying the original image information on the base image.
In an embodiment, when the processor 110 executes the updating of the third pixel value of the third pixel point in the initial matrix based on the reference point and the pixel selection algorithm, the following steps are specifically executed:
determining, in the initial matrix, the third pixel points associated with each matrix point of the image difference matrix based on the reference point and the matrix point determination algorithm;
and acquiring a reference pixel value of the matrix point, and updating the third pixel value of the third pixel point to the reference pixel value.
In an embodiment, when the processor 110 executes the acquiring of the base image and the at least one image split packet transmitted by the sending device, the following steps are specifically executed:
acquiring the current communication quality and the display resolution of the receiving device, and sending the communication quality and the display resolution to the sending device;
and acquiring the base image transmitted by the sending device, and acquiring the reference number of image split packets determined by the sending device based on the communication quality and the display resolution.
In the embodiment of the application, a receiving device acquires a base image and at least one image split packet transmitted by a sending device, wherein the base image and the at least one image split packet are generated after the sending device performs image matting on an original image; the receiving device then displays the base image and displays the image information corresponding to the image split packets on the base image. Because the original image is matted and split into a base image with a small memory footprint and at least one image split packet, the receiving device can display the base image of the original image as soon as it is first received and then display the image information of each received image split packet on the base image in turn, which shortens the loading time during image display and increases the image display speed. In the process of generating the base image, the original image can additionally be order-reduced, which greatly reduces the memory occupied by the base image received by the receiving device. Moreover, the communication quality and the display resolution of the receiving device can be taken into account in the image display process, so that the sending device can determine the corresponding number of image split packets to send together with the base image, which saves transmission cost during image transmission and better matches the actual usage scenario of the receiving device.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some service interfaces, devices or units, and may be an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (18)

1. An image processing method applied to a sending device, the method comprising:
carrying out image matting on an original image to obtain a base image corresponding to the original image and at least one image split packet;
and transmitting the base image and the at least one image split packet to a receiving device, wherein the receiving device is used for displaying the base image and displaying the image information corresponding to the image split packet on the base image.
2. The method according to claim 1, wherein the image matting processing on the original image to obtain a base image corresponding to the original image and at least one image split packet comprises:
determining a first matting position in the original image, and generating an image split packet and a target image excluding the image split packet based on the first matting position;
when the target image does not meet the matting stop condition, determining a second matting position in the target image, taking the second matting position as the first matting position, and executing the step of generating an image split packet and a target image excluding the image split packet based on the first matting position;
and when the target image meets the matting stop condition, taking the target image as the base image to obtain at least one image split packet and a base image excluding the at least one image split packet.
3. The method of claim 2, wherein determining a first matting position in the original image and generating an image split packet and a target image excluding the image split packet based on the first matting position comprises:
determining a reference point in the original image, and determining at least one first pixel point in the original image based on a preset pixel selection algorithm and the reference point;
acquiring a first pixel value of the at least one first pixel point, and generating an image split packet;
and generating a target image excluding the image split packet based on the at least one first pixel point and the original image.
4. The method of claim 3, wherein obtaining the first pixel value of the at least one first pixel point and generating the image split packet comprises:
acquiring a first pixel value of the at least one first pixel point, and determining position information of each first pixel point in a preset matrix;
adding a first pixel value of the first pixel point to the preset matrix based on the position information to generate an image difference matrix;
and generating an image split packet based on the image difference matrix, the reference point, and the pixel selection algorithm.
5. The method of claim 3, wherein generating the target image excluding the image split packet based on the at least one first pixel point and the original image comprises:
and in the original image, setting the first pixel values of all the first pixel points to a target pixel value to obtain a target image excluding the image split packet.
6. The method of claim 4, wherein generating the target image excluding the image split packet based on the at least one first pixel point and the original image comprises:
in the original image, setting the first pixel values of all the first pixel points to a target pixel value, and performing image reduction processing on the original image to obtain a target image excluding the image split packet.
7. The method of claim 6, further comprising:
acquiring a position mapping relation between each pixel point in the target image and the original image and an image size of the original image;
generating image coding information corresponding to the target image based on the position mapping relation and the image size;
the transmitting the base image and the at least one image split packet to a receiving device comprises:
and when the target image is the base image, transmitting the base image and the at least one image split packet to the receiving device, wherein the base image carries the image coding information.
8. The method of claim 1, wherein transmitting the base image and the at least one image split packet to a receiving device comprises:
acquiring the communication quality of the receiving device and the display resolution of the receiving device;
determining a reference number of target image split packets among the at least one image split packet based on the communication quality and the display resolution;
and transmitting the base image and the reference number of target image split packets to a receiving device.
9. An image processing method applied to a receiving device, the method comprising:
acquiring a base image and at least one image split packet transmitted by a sending device, wherein the base image and the at least one image split packet are generated after the sending device performs image matting on an original image;
and displaying the base image, and displaying the image information corresponding to the image split packet on the base image.
10. The method of claim 9, wherein displaying the base image comprises:
determining a first pixel point in the base image and a second pixel point other than the first pixel point;
calculating a first reference pixel value corresponding to the first pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of the second pixel point;
and updating the first pixel value of the first pixel point in the base image to the first reference pixel value.
11. The method of claim 9, wherein displaying the base image comprises:
acquiring the image coding information carried by the base image, and determining, based on the image coding information, the position mapping relation between at least one second pixel point in the base image and the original image and the image size of the original image;
and performing image restoration display on the at least one second pixel point in the base image according to the image size and the position mapping relation.
12. The method according to claim 11, wherein performing image restoration display on at least one second pixel point in the base image according to the image size and the position mapping relationship comprises:
constructing an initial matrix corresponding to the original image according to the image size;
adding at least one second pixel point in the base image to the initial matrix based on the position mapping relation;
determining at least one third pixel point other than the second pixel point in the initial matrix;
calculating a second reference pixel value corresponding to the third pixel point by adopting a preset difference value fitting algorithm based on a second pixel value of at least one second pixel point in the base image;
and updating the third pixel value of the third pixel point to be the second reference pixel value.
13. The method according to claim 10, wherein the displaying image information corresponding to the image split packet on the base image comprises:
acquiring the image difference matrix, the reference point, and the pixel selection algorithm of each image split packet in the at least one image split packet;
determining, in an image matrix corresponding to the base image, fourth pixel points associated with each matrix point of the image difference matrix based on the reference point and the pixel selection algorithm;
and acquiring a reference pixel value of the matrix point, and updating, in the image matrix corresponding to the base image, a fourth pixel value of the fourth pixel point to the reference pixel value.
14. The method according to claim 12, wherein the displaying image information corresponding to the image split packet on the base image comprises:
acquiring the image difference matrix, the reference point, and the pixel selection algorithm of each image split packet in the at least one image split packet;
updating the third pixel value of a third pixel point in the initial matrix based on the reference point and the pixel selection algorithm to obtain original image information corresponding to the base image;
displaying the original image information on the base image.
15. The method of claim 14, wherein said updating the third pixel value of a third pixel point in the initial matrix based on the reference point and the pixel selection algorithm comprises:
determining, in the initial matrix, the third pixel points associated with each matrix point of the image difference matrix based on the reference point and the matrix point determination algorithm;
and acquiring a reference pixel value of the matrix point, and updating the third pixel value of the third pixel point to the reference pixel value.
16. The method of claim 14, wherein obtaining the base image and the at least one image split packet transmitted by the sending device comprises:
acquiring the current communication quality and the display resolution of the receiving device, and sending the communication quality and the display resolution to the sending device;
and acquiring the base image transmitted by the sending device, and acquiring the reference number of image split packets determined by the sending device based on the communication quality and the display resolution.
17. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1 to 8 and 9 to 16.
18. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-8 and 9-16.
CN202010721960.2A 2020-07-24 2020-07-24 Image processing method, device, storage medium and electronic equipment Active CN111857515B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010721960.2A CN111857515B (en) 2020-07-24 2020-07-24 Image processing method, device, storage medium and electronic equipment
PCT/CN2021/095553 WO2022016981A1 (en) 2020-07-24 2021-05-24 Image processing methods and apparatus, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010721960.2A CN111857515B (en) 2020-07-24 2020-07-24 Image processing method, device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111857515A true CN111857515A (en) 2020-10-30
CN111857515B CN111857515B (en) 2024-03-19

Family

ID=72950237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010721960.2A Active CN111857515B (en) 2020-07-24 2020-07-24 Image processing method, device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN111857515B (en)
WO (1) WO2022016981A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022016981A1 (en) * 2020-07-24 2022-01-27 深圳市欢太科技有限公司 Image processing methods and apparatus, storage medium, and electronic device
CN115150390A (en) * 2022-06-27 2022-10-04 山东信通电子股份有限公司 Image display method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100054579A1 (en) * 2006-11-08 2010-03-04 Tokyo Institute Of Technology Three-dimensional surface generation method
CN105451019A (en) * 2015-11-25 2016-03-30 中国地质大学(武汉) Image compression transmission method facing wireless video sensor network
CN110148102A (en) * 2018-02-12 2019-08-20 腾讯科技(深圳)有限公司 Image composition method, ad material synthetic method and device
CN110475044A (en) * 2019-08-05 2019-11-19 Oppo广东移动通信有限公司 Image transfer method and device, electronic equipment, computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410299B (en) * 2017-08-15 2022-03-11 腾讯科技(深圳)有限公司 Information processing method and device and computer storage medium
CN111857515B (en) * 2020-07-24 2024-03-19 深圳市欢太科技有限公司 Image processing method, device, storage medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100054579A1 (en) * 2006-11-08 2010-03-04 Tokyo Institute Of Technology Three-dimensional surface generation method
CN105451019A (en) * 2015-11-25 2016-03-30 中国地质大学(武汉) Image compression transmission method facing wireless video sensor network
CN110148102A (en) * 2018-02-12 2019-08-20 腾讯科技(深圳)有限公司 Image composition method, ad material synthetic method and device
CN110475044A (en) * 2019-08-05 2019-11-19 Oppo广东移动通信有限公司 Image transfer method and device, electronic equipment, computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022016981A1 (en) * 2020-07-24 2022-01-27 深圳市欢太科技有限公司 Image processing methods and apparatus, storage medium, and electronic device
CN115150390A (en) * 2022-06-27 2022-10-04 山东信通电子股份有限公司 Image display method, device, equipment and medium
CN115150390B (en) * 2022-06-27 2024-04-09 山东信通电子股份有限公司 Image display method, device, equipment and medium

Also Published As

Publication number Publication date
CN111857515B (en) 2024-03-19
WO2022016981A1 (en) 2022-01-27

Similar Documents

Publication Publication Date Title
EP3751418B1 (en) Resource configuration method and apparatus, terminal, and storage medium
US10564920B2 (en) Dynamic server-side image sizing for fidelity improvements
CN109525853B (en) Live broadcast room cover display method and device, terminal, server and readable medium
CN111476871B (en) Method and device for generating video
CN108173742B (en) Image data processing method and device
CN113676741B (en) Data transmission method and device, storage medium and electronic equipment
CN110658961B (en) Information display method and device and electronic equipment
CN111078172B (en) Display fluency adjusting method and device, electronic equipment and storage medium
CN107436712B (en) Method, device and terminal for setting skin for calling menu
WO2022016981A1 (en) Image processing methods and apparatus, storage medium, and electronic device
CN113117326B (en) Frame rate control method and device
CN111914149A (en) Request processing method and device, storage medium and electronic equipment
CN113923515A (en) Video production method and device, electronic equipment and storage medium
CN111679811B (en) Web service construction method and device
CN113521728A (en) Cloud application implementation method and device, electronic equipment and storage medium
CN112084959A (en) Crowd image processing method and device
CN113965779A (en) Cloud game data transmission method, device and system and electronic equipment
CN110996164A (en) Video distribution method and device, electronic equipment and computer readable medium
CN115328725A (en) State monitoring method and device, storage medium and electronic equipment
CN111770510B (en) Network experience state determining method and device, storage medium and electronic equipment
CN112614049A (en) Image processing method, image processing device, storage medium and terminal
CN112597022A (en) Remote diagnosis method, device, storage medium and electronic equipment
CN114125048B (en) Message push setting method and device, storage medium and electronic equipment
CN115314588B (en) Background synchronization method, device, terminal, equipment, system and storage medium
WO2022089512A1 (en) Load control method and apparatus, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant