WO2024193523A1 - Image processing method based on end-cloud collaboration and related apparatus - Google Patents

Image processing method based on end-cloud collaboration and related apparatus

Info

Publication number
WO2024193523A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
differential
refined
region
electronic device
Application number
PCT/CN2024/082306
Other languages
English (en)
Chinese (zh)
Inventor
邹益人
李林锋
沈毅
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024193523A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects

Definitions

  • the present application relates to the field of electronic technology, and in particular to an image processing method based on end-cloud collaboration and a related apparatus.
  • embodiments of the present application provide an end-cloud collaborative image processing method that effectively reduces the data traffic and processing delay of image transmission, cooperating with the cloud to achieve efficient image processing, so that users can promptly view cloud-retouched images on the end side, effectively improving the user experience.
  • in a first aspect, an embodiment of the present application provides an end-cloud collaborative image processing method applied to a system comprising an electronic device and a server, the method comprising: the electronic device acquires a first image to be processed; the electronic device sends a first area map of the first image to the server through a first request message, the first area map including the image within a part or all of the area of the first image; the electronic device acquires a second image to be processed, a second area map including the image within a part or all of the area of the second image; the electronic device sends a first differential image to the server through a second request message, the first differential image being a differential image between the second area map of the second image and the first area map of the first image; the server restores the second area map based on the first differential image and the first area map; the server performs image processing on the second area map to obtain a refined image of the second area map; the server sends a second response message to the electronic device, the second response message being used to indicate the refined image of the second area map; and the electronic device determines the refined image of the second image based on the refined image of the second area map.
  • in this way, the end side can upload the differential image between the second image and the first image, where the first image has been uploaded to the cloud before the second image or is uploaded to the cloud at the same time as the second image; the cloud (i.e., the server) restores the second image based on the differential image and then refines the second image.
  • since refinement is performed on the cloud, the computing power requirements and load on the end side are reduced; compared with uploading the second image directly, uploading the differential image reduces the amount of data transmitted between the end side and the cloud, effectively reducing the data traffic and processing delay of image transmission, and cooperating with the cloud achieves efficient image processing, so that users can promptly view the cloud-refined images on the end side, effectively improving the user experience.
  • the second response message includes a second differential image, which is a differential image between a refined image of the second region image and the second region image; the method further includes: the electronic device restores the refined image of the second region image based on the second differential image and the second region image.
  • the cloud sends a differential image of the second region image before and after refinement to the end side, and the end side can restore the refined image of the second region image based on the differential image.
  • sending the differential image reduces the amount of data transmitted between the end side and the cloud, effectively reducing the data traffic and processing delay of image transmission, so that the user can promptly view the cloud-refined image on the end side; a sketch of the full differential round trip follows below.
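To make the two differential legs concrete, the following is a minimal sketch of the round trip, assuming simple pixel-wise differencing over equally sized images; the function names and the int16 encoding are illustrative assumptions, not the restoration algorithm the patent specifies.

```python
import numpy as np

def compute_diff(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Differential image = source - reference, held in a wider signed type."""
    return source.astype(np.int16) - reference.astype(np.int16)

def restore(diff: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Restore the source image from a differential image and its reference."""
    return (reference.astype(np.int16) + diff).clip(0, 255).astype(np.uint8)

# End side: upload the differential of the second area map against the first.
first = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
second = first.copy()  # in a burst, the second frame is nearly identical
diff_up = compute_diff(second, first)        # carried by the second request

# Cloud side: restore, refine, and return the refined-vs-original differential.
restored = restore(diff_up, first)
refined = restored                           # stand-in for cloud retouching
diff_down = compute_diff(refined, restored)  # carried by the second response

# End side: recover the refined image from the returned differential.
assert np.array_equal(restore(diff_down, second), refined)
```

Because burst frames are highly similar, diff_up is mostly zeros and compresses far better than the full frame, which is where the traffic savings come from.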
  • the method further includes: the server performs image processing on the first area image to obtain a refined image of the first area image; the server sends a first response message to the electronic device, the first response message is used to indicate the refined image of the first area image; the electronic device determines the refined image of the first image based on the refined image of the first area image.
  • the cloud sends a differential image of the first area image before and after the refinement to the end side, and the end side can restore the refined image of the first area image based on the differential image.
  • sending the differential image reduces the amount of data transmitted between the end side and the cloud, effectively reducing the data traffic and processing delay of image transmission, so that the user can promptly view the cloud-refined image on the end side.
  • the electronic device is provided with a continuous shooting function, and the first image and the second image are two images taken during one activation of the continuous shooting function; the method further comprises: the electronic device detects a first input operation for starting the continuous shooting function; the electronic device acquiring the first image to be processed and the electronic device acquiring the second image to be processed includes: in response to the first input operation, the electronic device obtains the first image and the second image of the continuous shooting; the first image is the first image of the continuous shooting, and the second image is a non-first image of the continuous shooting.
  • the similarity of the images continuously shot by the continuous shooting function is usually high; in the scenario where the continuous shooting function is used to shoot images, the terminal side can upload the first image of the continuous shooting to the cloud; for the non-first image of the continuous shooting, such as the second image, the terminal side can upload the differential image of the image and the first image to the cloud. Compared with directly uploading the second image, uploading the differential image can effectively reduce the data traffic of the image transmission and reduce the processing delay.
  • the electronic device acquires a first image to be processed, including: in response to a detected first shooting instruction, a camera of the electronic device captures the first image; the electronic device acquires a second image to be processed, including: in response to a detected second shooting instruction, a camera of the electronic device captures the second image; the shooting time interval between the second image and the first image is less than a time threshold, and/or the similarity between the second image and the first image is greater than a similarity threshold.
  • in this scenario, the end side can upload to the cloud a differential image between the second image and the first image; compared with directly uploading the second image, uploading the differential image effectively reduces the data traffic of image transmission and reduces processing delay.
  • the method further includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image is a differential image between a third area map of the third image and a first area map of the first image; the third area map includes images in part or all of the area of the third image; the server restores the third area map according to the third differential image and the first area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message is used to indicate the refined image of the third area map; the electronic device determines the refined image of the third image based on the refined image of the third area map.
  • that is, the reference image (i.e., the first image) used for differencing the third image is an image that was uploaded to the cloud as a complete image rather than as a differential image.
  • that is, after uploading the first image to the cloud, for each subsequent image to be processed, the end side uploads to the cloud the differential image between that image and the first image.
  • in the continuous shooting scenario, the end side uploads to the cloud the differential image between each non-first image and the first image of the continuous shooting.
  • the method further includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image is a differential image between a third area map of the third image and a second area map of the second image; the third area map includes images in part or all of the area of the third image; the server restores the third area map according to the third differential image and the second area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message is used to indicate the refined image of the third area map; the electronic device determines the refined image of the third image based on the refined image of the third area map.
  • that is, the reference image (i.e., the second image) used for differencing the third image is an image whose corresponding differential image has already been uploaded to the cloud.
  • that is, the end side uploads to the cloud the differential image between the third image and the image to be processed immediately before the third image.
  • in the continuous shooting scenario, the end side uploads to the cloud the differential image between each image and the immediately preceding image of the continuous shooting.
  • the terminal side may upload to the cloud a differential image between the third image and any image that has been uploaded to the cloud.
  • the electronic device sending the first differential image to the server through the second request message includes: when the data volume of the first differential image is less than the data volume of the second area map, the electronic device sends the first differential image and the image identifier of the first area map to the server through the second request message; when the data volume of the second differential image is less than the data volume of the refined image of the second area map, the second response message includes the second differential image and the image identifier of the second area map, the second differential image being a differential image between the refined image of the second area map and the second area map; when the data volume of the second differential image is not less than the data volume of the refined image of the second area map, the second response message includes the refined image of the second area map.
  • that is, when the end side or the cloud side computes a differential for a specific image, the differential image is sent only if its data volume is smaller; otherwise, the specific image is sent directly. In this way, the data volume of the images transmitted between the end side and the cloud side is kept as small as possible, effectively reducing the data traffic of image transmission and the processing delay; a sketch of this rule follows below.
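A minimal sketch of this send-the-smaller-payload rule, comparing encoded byte sizes (the function name and the message fields are illustrative assumptions):

```python
def choose_payload(full_bytes: bytes, diff_bytes: bytes) -> dict:
    """Send the differential image only when it is actually smaller."""
    if len(diff_bytes) < len(full_bytes):
        return {"image_type": "differential", "payload": diff_bytes}
    return {"image_type": "non-differential", "payload": full_bytes}
```

The same comparison applies in both directions: on the end side before uploading, and on the cloud side before sending the response.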
  • when the refinement mode of an image is the full-image refinement mode, the area map of the image includes the image content of the entire image; when the refinement mode of the second image is the full-image refinement mode, the second image and the second area map are the same, and the refined image of the second area map is the refined image of the second image.
  • that is, the refinement mode of the image to be processed is identified, and for an image to be processed in the subject refinement mode, the target subject map (i.e., the area map) is cut out from the image to be processed, and only the target subject map or its corresponding differential image is sent to the cloud. In this way, for the subject refinement mode, the amount of data uploaded to and returned from the cloud is further reduced, thereby reducing the network traffic and processing delay for transmitting images.
  • the method further includes: the electronic device obtains the position of the second area map in the second image and crops the second area map from the second image; the electronic device determining the refined image of the second image based on the refined image of the second area map includes: based on the position of the second area map in the second image, the electronic device replaces the second area map in the second image with the refined image of the second area map to obtain the refined image of the second image.
  • that is, after the cloud returns the refined image of the target subject map (i.e., the area map), the refined image of the target subject map is used to replace the target subject map in the image to be processed, so that the refined image of the image to be processed is obtained, as sketched below.
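A minimal sketch of this crop-and-replace step for the subject refinement mode, assuming the position is an (x, y, w, h) box (the helper names are illustrative):

```python
import numpy as np

Box = tuple[int, int, int, int]  # (x, y, width, height)

def crop_region(image: np.ndarray, box: Box) -> np.ndarray:
    """Cut the target subject map out of the image to be processed."""
    x, y, w, h = box
    return image[y:y + h, x:x + w].copy()

def paste_region(image: np.ndarray, refined_region: np.ndarray, box: Box) -> np.ndarray:
    """Replace the region at `box` with its cloud-refined version."""
    x, y, w, h = box
    out = image.copy()
    out[y:y + h, x:x + w] = refined_region
    return out
```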
  • the first request message includes a first photo retouching parameter, the first photo retouching parameter being used to indicate at least one image processing method adopted for the second image; the server performing image processing on the second area map of the second image to obtain a refined image of the second area map includes: the server performs image processing on the second area map according to the at least one image processing method indicated by the first photo retouching parameter to obtain the refined image of the second area map.
  • the electronic device pre-stores a correspondence between multiple image processing methods and refinement modes, the refinement modes include a full image refinement mode and a subject refinement mode; the multiple image processing methods include multiple shooting modes, the multiple shooting modes include a portrait mode, and the refinement mode corresponding to the portrait mode is a subject refinement mode; the image processing method used for the image is used to determine the refinement mode of the image.
  • the refinement mode of the image can be determined according to the image processing method used for the image to be processed.
  • the first request message also includes an image identifier of the first area map
  • the second request message also includes a reference image identifier and a source image identifier of the first differential image, the reference image of the first differential image is the first area map, and the source image of the first differential image is the second area map
  • the server restores the second area map based on the first differential image and the first area map, including: the server restores the second area map based on the first differential image and the first area map indicated by the reference image identifier, and determines that the image identifier of the restored second area map is the source image identifier
  • the second response message includes the image identifier of the second area map.
  • the reference image identifier of the differential image is uploaded so that the cloud can determine the reference image for restoring the second area map;
  • the source image (i.e., the second area map) identifier of the differential image is uploaded so that the cloud can determine the image identifier of the restored second area map.
  • the second request message also includes an algorithm identifier, which indicates a first restoration algorithm; the server restores the second area map based on the first differential image and the first area map, including: the server restores the second area map based on the first differential image, the first area map and the first restoration algorithm.
  • the image type includes a differential image and a non-differential image
  • the first request message also includes a first image type, and the first image type is used to indicate that the first area map is a non-differential image
  • the second request message also includes a second image type, and the second image type is used to indicate that the first differential image is a differential image
  • the server restores the second area map based on the first differential image and the first area map, including: when determining that the second image type is a differential image, the server restores the second area map based on the first differential image and the first area map.
  • that is, when uploading an image, the image type is also uploaded, so that the cloud can determine whether differential restoration is required for the image uploaded by the end side; a sketch of the message fields and this dispatch follows below.
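A minimal sketch of the message fields described in the preceding bullets, and of how the cloud dispatches on the image type; the field names are illustrative assumptions, and a byte-wise XOR stands in for whatever restoration algorithm the algorithm identifier selects:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UploadRequest:
    image_type: str                    # "differential" or "non-differential"
    payload: bytes                     # encoded area map or differential image
    source_image_id: str               # identifier of the image the payload encodes
    reference_image_id: Optional[str] = None   # only set for differential payloads
    restore_algorithm_id: Optional[str] = None # selects the restoration algorithm
    retouch_params: Optional[dict] = None      # image processing methods to apply

def handle_upload(request: UploadRequest, store: dict[str, bytes]) -> None:
    """Cloud side: restore differential payloads against the named reference."""
    if request.image_type == "differential":
        reference = store[request.reference_image_id]
        # request.restore_algorithm_id would pick the real algorithm; a
        # byte-wise XOR (which undoes an XOR-based diff) stands in here.
        restored = bytes(a ^ b for a, b in zip(request.payload, reference))
        store[request.source_image_id] = restored  # keyed by the source image id
    else:
        store[request.source_image_id] = request.payload
```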
  • the electronic device sends a first area map of the first image to the server via a first request message, including: when the electronic device determines that the first image is not a non-first image in a continuous shooting scenario, the electronic device sends a first area map within the refined image area of the first image to the server via the first request message; the electronic device sends a first differential image to the server via a second request message, including: when the electronic device determines that the second image is a non-first image in a continuous shooting scenario, the electronic device sends the first differential image to the server via the second request message.
  • that is, when the second image satisfies any one of the following three conditions, the second image is a non-first image in the continuous shooting scenario: the second image is a non-first photo taken by the continuous shooting function; the shooting time interval between the second image and the first image is less than a time threshold; the similarity between the second image and the first image is greater than a similarity threshold. A sketch of this test follows below.
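A minimal sketch of this three-condition test (the threshold values and the similarity metric are illustrative assumptions, not values from the patent):

```python
def is_non_first_in_burst(burst_index: int, time_gap_s: float, similarity: float,
                          time_threshold_s: float = 0.5,
                          similarity_threshold: float = 0.9) -> bool:
    """Any one of the three conditions makes the image a non-first image."""
    return (burst_index > 0                       # non-first photo of a burst
            or time_gap_s < time_threshold_s      # shot soon after the first image
            or similarity > similarity_threshold) # nearly identical content
```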
  • the refinement mode of the second image is the subject refinement mode.
  • the electronic device pre-stores a correspondence between multiple subject types and refinement modes, and a first subject type among the multiple subject types corresponds to a subject refinement mode; determining the refinement mode of the second image as the subject refinement mode includes: identifying that the target subject of the second image is the first subject type, and determining that the refinement mode of the second image is the subject refinement mode corresponding to the first subject type.
  • the above-mentioned multiple image processing methods include some or all of the following: multiple shooting modes, multiple filters, multiple character special effects, multiple beauty options, high-definition restoration and high-definition portraits.
  • the first image and the second image are preview images captured by a camera of the electronic device; the method further includes: the electronic device displays a refined image of the first image in a preview frame; the electronic device displays a refined image of the second image in the preview frame.
  • the method also includes: when multiple images are selected, a second input operation is detected, and the second input operation is used to trigger image processing of the multiple images; the multiple images include a first image and a second image; the above-mentioned electronic device obtains the first image to be processed, and the electronic device obtains the second image to be processed, including: in response to the second input operation, the electronic device obtains the first image to be processed and obtains the second image to be processed.
  • in a second aspect, an embodiment of the present application provides an end-cloud collaborative image processing method applied to an electronic device, the method comprising: the electronic device acquires a first image to be processed; the electronic device sends a first area map of the first image to a server through a first request message, the first area map including the image within a part or all of the area of the first image; the electronic device acquires a second image to be processed, a second area map including the image within a part or all of the area of the second image; the electronic device sends a first differential image to the server through a second request message, the first differential image being a differential image between the second area map of the second image and the first area map of the first image; the second request message is used to indicate restoration of the second area map according to the first differential image and obtaining of a refined image of the second area map; the electronic device receives a second response message sent by the server, the second response message being used to indicate the refined image of the second area map; and the electronic device determines the refined image of the second image based on the refined image of the second area map.
  • in this way, the end side can upload the differential image between the second image and the first image, where the first image has been uploaded to the cloud before the second image or is uploaded to the cloud at the same time as the second image; the cloud (i.e., the server) restores the second image based on the differential image and then refines the second image.
  • since refinement is performed on the cloud, the computing power requirements and load on the end side are reduced; compared with uploading the second image directly, uploading the differential image reduces the amount of data transmitted between the end side and the cloud, effectively reducing the data traffic and processing delay of image transmission, and cooperating with the cloud achieves efficient image processing, so that users can promptly view the cloud-refined images on the end side, effectively improving the user experience.
  • the second response message includes a second differential image, which is a differential image between a refined image of the second region image and the second region image; the method also includes: the electronic device restoring the refined image of the second region image based on the second differential image and the second region image.
  • the first request message is used to instruct the server to obtain a refined image of the first area image; the method also includes: the electronic device receives a first response message sent by the server, the first response message is used to indicate the refined image of the first area image; the electronic device determines the refined image of the first image based on the refined image of the first area image.
  • the electronic device is provided with a continuous shooting function, and the first image and the second image are two images taken during one activation of the continuous shooting function; the method further includes: the electronic device detects a first input operation for starting the continuous shooting function; the electronic device acquiring the first image to be processed and the electronic device acquiring the second image to be processed includes: in response to the first input operation, the electronic device obtains the first image and the second image of the continuous shooting; the first image is the first image of the continuous shooting, and the second image is a non-first image of the continuous shooting.
  • the electronic device acquires a first image to be processed, including: in response to a detected first shooting instruction, a camera of the electronic device captures the first image; the electronic device acquires a second image to be processed, including: in response to a detected second shooting instruction, a camera of the electronic device captures the second image; a shooting time interval between the second image and the first image is less than a time threshold, and/or a similarity between the second image and the first image is greater than a similarity threshold.
  • the method also includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image being a differential image between a third area image of the third image and a first area image of the first image; the third area image includes an image within a partial or entire area of the third image; the third request message is used to indicate restoration of the third area image based on the third differential image, and to obtain a refined image of the third area image; the electronic device receives a third response message sent by the server, the third response message being used to indicate a refined image of the third area image; the electronic device determines a refined image of the third image based on the refined image of the third area image.
  • the method also includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image being a differential image between a third area image of the third image and a second area image of the second image; the third area image includes an image within a partial or entire area of the third image; the third request message is used to indicate restoration of the third area image based on the third differential image, and to obtain a refined image of the third area image; the electronic device receives a third response message sent by the server, the third response message being used to indicate a refined image of the third area image; the electronic device determines a refined image of the third image based on the refined image of the third area image.
  • the electronic device sending the first differential image to the server via the second request message includes: when the data amount of the first differential image is less than the data amount of the second area map, the electronic device sends the first differential image and the image identifier of the first area map to the server via the second request message; when the data amount of the second differential image is less than the data amount of the refined image of the second area map, the second response message includes the second differential image and the image identifier of the second area map, the second differential image being a differential image between the refined image of the second area map and the second area map; when the data amount of the second differential image is not less than the data amount of the refined image of the second area map, the second response message includes the refined image of the second area map.
  • when the refinement mode of an image is the full-image refinement mode, the area map of the image includes the image content of the entire image; when the refinement mode of the second image is the full-image refinement mode, the second image and the second area map are the same, and the refined image of the second area map is the refined image of the second image.
  • when the refinement mode of an image is the subject refinement mode, the area map of the image includes the target subject in the image, and the size of the area map is smaller than the size of the image; when the refinement mode of the second image is the subject refinement mode, before the electronic device sends the first differential image to the server through the second request message, the method further includes: the electronic device obtains the position of the second area map in the second image and crops the second area map from the second image; the electronic device determining the refined image of the second image based on the refined image of the second area map includes: based on the position of the second area map in the second image, the electronic device replaces the second area map in the second image with the refined image of the second area map to obtain the refined image of the second image.
  • the first request message includes a first photo retouching parameter, which is used to indicate at least one image processing method used for the second image; the first request message is used to request the server to perform image processing on the second area map according to the at least one image processing method indicated by the first photo retouching parameter to obtain a refined image of the second area map.
  • the electronic device pre-stores a correspondence between multiple image processing methods and refinement modes, the refinement modes including a full-image refinement mode and a subject refinement mode;
  • the above-mentioned multiple image processing methods include multiple shooting modes, the above-mentioned multiple shooting modes include a portrait mode, and the refinement mode corresponding to the portrait mode is the subject refinement mode;
  • the image processing method adopted by the image is used to determine the refinement mode of the image.
  • the first request message also includes an image identifier of the first area map
  • the second request message also includes a reference image identifier and a source image identifier of the first differential image, the reference image of the first differential image is the first area map, and the source image of the first differential image is the second area map
  • the second response message includes the image identifier of the second area map.
  • the second request message further includes an algorithm identifier, where the algorithm identifier is used to indicate that the restoration algorithm used by the server to restore the second regional map is the first restoration algorithm.
  • in a third aspect, an embodiment of the present application provides an end-cloud collaborative image processing method applied to a server, the method comprising: the server receives a first area map of a first image sent by an electronic device through a first request message, the first area map including the image within a part or all of the area of the first image; the server receives a first differential image sent by the electronic device through a second request message, the first differential image being a differential image between a second area map and the first area map, the second area map including the image within a part or all of the area of a second image; the server restores the second area map based on the first differential image and the first area map; the server performs image processing on the second area map to obtain a refined image of the second area map; and the server sends a second response message to the electronic device, the second response message being used to indicate the refined image of the second area map, the refined image of the second area map being used to determine the refined image of the second image.
  • in this way, the end side can upload the differential image between the second image and the first image, where the first image has been uploaded to the cloud before the second image or is uploaded to the cloud at the same time as the second image; the cloud (i.e., the server) restores the second image based on the differential image and then refines the second image.
  • since refinement is performed on the cloud, the computing power requirements and load on the end side are reduced; compared with uploading the second image directly, uploading the differential image reduces the amount of data transmitted between the end side and the cloud, effectively reducing the data traffic and processing delay of image transmission, and cooperating with the cloud achieves efficient image processing, so that users can promptly view the cloud-refined images on the end side, effectively improving the user experience.
  • the second response message includes a second differential image, which is a differential image between a refined image of the second area image and the second area image; the second response message is used to instruct the electronic device to restore the refined image of the second area image based on the second differential image and the second area image.
  • the method also includes: the server performs image processing on the first area image to obtain a refined image of the first area image; the server sends a first response message to the electronic device, the first response message is used to indicate the refined image of the first area image; the refined image of the first area image is used to determine the refined image of the first image.
  • the method also includes: the server receives a third differential image sent by the electronic device through a third request message, the third differential image is a differential image between the third area map and the first area map; the third area map includes an image within a part or all of the area of the third image; the server restores the third area map based on the third differential image and the first area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message is used to indicate the refined image of the third area map; the refined image of the third area map is used to determine the refined image of the third image.
  • the method also includes: the server receives a third differential image sent by the electronic device through a third request message, the third differential image is a differential image between the third area map and the second area map; the third area map includes an image within a part or all of the area of the third image; the server restores the third area map based on the third differential image and the second area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message is used to indicate the refined image of the third area map; the refined image of the third area map is used to determine the refined image of the third image.
  • when the data amount of the second differential image is smaller than the data amount of the refined image of the second area map, the second response message includes the second differential image and the image identifier of the second area map, the second differential image being a differential image between the refined image of the second area map and the second area map; when the data amount of the second differential image is not smaller than the data amount of the refined image of the second area map, the second response message includes the refined image of the second area map.
  • the first request message includes a first photo retouching parameter, the first photo retouching parameter being used to indicate at least one image processing method adopted for the second image; the server performing image processing on the second area map of the second image to obtain a refined image of the second area map includes: the server performs image processing on the second area map according to the at least one image processing method indicated by the first photo retouching parameter to obtain the refined image of the second area map.
  • the first request message also includes an image identifier of the first area map
  • the second request message also includes a reference image identifier and a source image identifier of the first differential image, the reference image of the first differential image is the first area map, and the source image of the first differential image is the second area map
  • the server restores the second area map based on the first differential image and the first area map, including: the server restores the second area map based on the first differential image and the first area map indicated by the reference image identifier, and determines that the image identifier of the restored second area map is the source image identifier
  • the second response message includes the image identifier of the second area map.
  • the second request message also includes an algorithm identifier, which indicates a first restoration algorithm; the server restores the second area map based on the first differential image and the first area map, including: the server restores the second area map based on the first differential image, the first area map and the first restoration algorithm.
  • in a fourth aspect, an embodiment of the present application provides an electronic device, comprising: a processor and a memory, wherein the memory is coupled to the processor and is used to store computer program code, the computer program code comprising computer instructions; when the processor reads the computer instructions from the memory, the electronic device executes the end-cloud collaborative image processing method described in the second aspect.
  • in a fifth aspect, an embodiment of the present application provides a server, comprising: a processor and a memory, wherein the memory is coupled to the processor and is used to store computer program code, the computer program code comprising computer instructions; when the processor reads the computer instructions from the memory, the server executes the end-cloud collaborative image processing method described in the third aspect.
  • in a sixth aspect, an embodiment of the present application provides a computer-readable storage medium comprising computer instructions.
  • when the computer instructions are executed on a device (e.g., the electronic device of the fourth aspect or the server of the fifth aspect), the device executes the end-cloud collaborative image processing method described in any implementation of any of the above aspects.
  • FIG. 1 is a schematic diagram of the system architecture of a communication system provided in an embodiment of the present application.
  • FIGS. 2A to 2C are image processing flow charts for continuous shooting scenarios provided in embodiments of the present application.
  • FIGS. 3A to 3L are user interfaces related to the continuous shooting scenario provided in an embodiment of the present application.
  • FIGS. 4A to 4F are user interfaces related to the continuous shooting scenario provided in an embodiment of the present application.
  • FIGS. 5A to 5H are user interfaces related to the continuous shooting scenario provided in an embodiment of the present application.
  • FIGS. 6A to 6D are user interfaces related to the continuous shooting scenario provided in an embodiment of the present application.
  • FIG. 7A is a flow chart of an end-cloud collaborative image processing method provided in an embodiment of the present application.
  • FIG. 7B is a flow chart of a method for determining a non-first image in a continuous shooting scenario provided in an embodiment of the present application.
  • FIG. 8 is a diagram of a device architecture provided in an embodiment of the present application.
  • FIG. 9A is a flow chart of an end-cloud collaborative image processing method provided in an embodiment of the present application.
  • FIG. 9B is a schematic diagram of an image involved in the end-cloud collaborative image processing method provided in an embodiment of the present application.
  • FIG. 10A is a flow chart of an end-cloud collaborative image processing method provided in an embodiment of the present application.
  • FIG. 10B is a schematic diagram of an image involved in the end-cloud collaborative image processing method provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of the structure of a server provided in an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be understood as suggesting or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature defined as “first” or “second” may explicitly or implicitly include one or more of the features, and in the description of the embodiments of the present application, unless otherwise specified, “multiple” means two or more.
  • Fig. 1 exemplarily shows the system architecture of a communication system 10 provided in an embodiment of the present application.
  • the communication system 10 includes a terminal device 100 and a server 200.
  • the terminal device 100 can communicate with the server 200 through a communication network.
  • the above-mentioned communication networks may include local area networks (LAN) and/or wide area networks (WAN).
  • the communication network can be implemented using any known network communication protocol, which can be various wired or wireless communication protocols, such as Ethernet, universal serial bus (USB), FIREWIRE, global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth, wireless fidelity (Wi-Fi), NFC, voice over Internet protocol (VoIP), communication protocols supporting network slicing architecture, or any other suitable communication protocols.
  • the terminal device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device (e.g., a smart bracelet), a vehicle-mounted device, a smart home device (e.g., a smart TV, smart screen, or large-screen device), and/or a smart city device.
  • the server 200 may be a single server, or a server cluster consisting of multiple servers, or a cloud computing center.
  • the server 200 involved in the embodiment of the present application may also be referred to as a cloud server or a cloud.
  • Figure 1 is merely a schematic diagram of the system structure of the communication system provided in an embodiment of the present application, and does not constitute a specific limitation on the communication system 10.
  • the communication system 10 may include more or fewer devices than shown in the figure. For example, it may also include wireless relay equipment and wireless backhaul equipment (not shown in Figure 1), which are not limited here.
  • the terminal device 100 can upload image a to the cloud for refinement, and the cloud then transmits the refined image a back to the terminal side.
  • the above-mentioned image a may be a photo a taken in real time by the terminal device 100.
  • the terminal device 100 may automatically upload the taken photo a to the cloud for retouching, and the cloud will then transmit the retouched photo a back to the end side; when the user views the photo a, the terminal device will display the retouched photo a transmitted back from the cloud to the user.
  • a camera that supports automatically transmitting photos taken on the end side to the cloud for retouching may be referred to as a cloud camera, and the corresponding shooting mode may be referred to as the cloud photo mode.
  • the time-consuming steps include: uploading photos, image processing (i.e., retouching) on the cloud, and transmitting back the retouched photos.
  • the time consumption of uploading photos and transmitting back the retouched photos is limited by the network speed and the size of the photos.
  • when the image data volume of the photos is large and/or the network speed is moderate, transmission takes a long time; in particular, uploading photos takes even longer due to the limited uplink bandwidth of mobile networks (such as the fifth generation mobile communication technology, 5G).
  • for example, for 90% of users the 5G uplink bandwidth is 21 Mbps; at that rate, it takes about 2.28 s to transmit a 6 MB photo to the cloud, and about 0.53 s to transmit the retouched photo back to the end side.
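As a back-of-the-envelope check on the upload figure, assuming 1 MB = 8 Mbit and ignoring protocol overhead:

$$t_{\text{upload}} = \frac{6\,\text{MB} \times 8\,\text{Mbit/MB}}{21\,\text{Mbit/s}} \approx 2.29\,\text{s}$$

which is consistent with the 2.28 s figure above; the return trip is faster because downlink bandwidth typically exceeds uplink bandwidth.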
  • the above image processing method has the following problems: the amount of data uploaded from the end side and the amount of data transmitted back from the cloud are both high, which leads to high data traffic consumption on the end side and high image processing latency, which in turn prevents users from viewing the refined images in time.
  • the real-time cloud photography mode has higher requirements for image processing latency, and the above problems lead to a poor user experience.
  • the continuous shooting scenario can be a scenario of taking photos at a high frequency; in this scenario, the terminal device 100 takes multiple photos continuously, such as the first to nth photos of the continuous shooting, and refines the above multiple photos. For example, see the end-cloud collaborative image processing process in the continuous shooting scenario shown in Figure 2A; in the continuous shooting scenario, the image processing delays of multiple photos are superimposed, the total processing time is extended, and the user experience is poor.
  • the end side uploads multiple images to the cloud; for an image a in a non-continuous shooting scenario, or the first image a in a continuous shooting scenario (for example, the first photo of the continuous shooting shown in FIG. 2B), the end side sends the complete image a to the cloud; the cloud returns to the end side the differential image of image a before and after refinement; the end side can then restore the refined image of image a based on the differential image and image a.
  • for another image b to be processed, the end side sends to the cloud the differential image between image b and an image c (for example, the i-th photo of the continuous shooting shown in FIG. 2B, where i is less than n and greater than or equal to 1); the cloud can restore image b based on the differential image and image c, and sends back to the end side the differential image of image b before and after refinement; the end side can then restore the refined image of image b based on that differential image and image b.
  • image c is the reference image used for computing the differential image of image b.
  • image c is an image that has been uploaded to the cloud before image b, or an image uploaded to the cloud at the same time as image b.
  • image c can be any image that has been uploaded to the cloud.
  • the terminal device 100 uploads the complete image of the first photo to the cloud for fine-tuning; the terminal device 100 uploads the differential image of the second photo and the first photo to the cloud, and the cloud can restore the second photo based on the differential image and the first photo, and then retouch the second photo. In this way, the cloud has acquired the first photo and the second photo.
  • the terminal device 100 can upload the differential image of the third photo and the first photo to the cloud, and the cloud can restore the third photo based on the differential image and the first photo; or, the terminal device 100 can upload the differential image of the third photo and the second photo to the cloud, and the cloud can restore the third photo based on the differential image and the second photo.
  • the cloud has obtained any photo before the nth photo, such as the ith photo; the terminal device 100 can upload the differential image of the nth photo and the ith photo to the cloud, and the cloud can restore the nth photo based on the differential image and the ith photo, and then refine the nth photo.
  • the image c may be the above-mentioned image a, that is, the image whose complete image is uploaded to the cloud.
  • image c is the previous image of image b, that is, i is equal to n-1.
  • for example, the terminal device 100 uploads the complete image of the first photo to the cloud for refinement; the terminal device 100 uploads the differential image between the second photo and the first photo to the cloud, and the cloud restores the second photo based on the differential image and the first photo and then refines it; similarly, the terminal device 100 uploads the differential image between the n-th photo (for example, the 3rd) and the (n-1)-th photo (for example, the 2nd) to the cloud, and the cloud restores the n-th photo based on the differential image and the (n-1)-th photo and then refines it, as sketched below.
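A minimal sketch of this chained-differential upload for a burst, reusing the illustrative compute_diff helper from the earlier sketch (the request fields are assumptions, not the patent's wire format):

```python
import numpy as np

def compute_diff(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    return source.astype(np.int16) - reference.astype(np.int16)

def build_burst_requests(photos: list[np.ndarray]) -> list[dict]:
    """First photo as a complete image, each later photo as a differential
    against the immediately preceding photo."""
    requests = [{"image_type": "non-differential", "source_image_id": "0",
                 "payload": photos[0]}]
    for i in range(1, len(photos)):
        requests.append({"image_type": "differential",
                         "source_image_id": str(i),
                         "reference_image_id": str(i - 1),
                         "payload": compute_diff(photos[i], photos[i - 1])})
    return requests
```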
  • the data volume of the differential images uploaded by the end side is small, which effectively reduces the network traffic and latency for uploading photos; the data volume of the differential images returned by the cloud is also small, which effectively reduces the network traffic and latency for transmitting the photos back; in the continuous shooting scenario, users can view the refined images in time, effectively improving the user experience.
  • an image c described as "uploaded" in the embodiments of the present application means that the complete image or the differential image of image c has been uploaded to the cloud; according to the aforementioned scheme, when the differential image corresponding to image c is uploaded to the cloud, the cloud can restore the complete image c based on that differential image and the reference image corresponding to image c.
  • the “fine-tuning” involved in the embodiments of the present application refers to performing image processing on an image using an image processing algorithm/model corresponding to a specific image processing method.
  • the embodiments of the present application do not specifically limit the above-mentioned specific image processing method, which will be introduced in detail in subsequent embodiments and will not be elaborated here.
  • the following introduces the application scenarios involved in the embodiments of the present application, and exemplifies the continuous shooting scene and the fine-tuning mode in combination with the application scenarios.
  • the end-cloud collaborative image processing method provided in the embodiment of the present application can be applied to two types of application scenarios.
  • the image to be processed is an image captured in real time by the camera in the cloud shooting mode, and the terminal device 100 automatically uploads the image to the cloud for fine-tuning;
  • the first type of application scenario includes the following application scenarios 1 to 3.
  • the image to be processed is a non-real-time image stored locally, and the user manually triggers the terminal device 100 to upload the image to the cloud for fine-tuning;
  • the second type of application scenario includes the following application scenarios 4 and 5.
  • the above-mentioned locally stored images may be photos previously taken by the terminal device 100, or images obtained by the terminal device 100 through other means, such as images sent by other devices, screenshots, or images downloaded from web pages, which are not specifically limited here.
  • Application scenario 1: taking and retouching multiple photos using the continuous shooting function in the cloud photo mode
  • in some examples, after the terminal device 100 installs APP1, APP1 can automatically turn on the cloud photo mode; in other examples, after the terminal device 100 installs APP1, the user needs to manually turn on the cloud photo mode.
  • Figures 3A to 3B show an example of manually turning on the cloud photo mode.
  • FIG. 3A shows a main interface 11 for displaying installed applications (APPs); the terminal device 100 has a camera APP installed, and the main interface 11 can display the application icon 101 of the camera APP. As shown in FIG. 3A and FIG. 3B, after detecting that the user clicks the application icon 101, the terminal device 100 displays the shooting interface 12 of the camera APP.
  • the shooting interface 12 includes a preview box 102 , a shooting mode bar 103 , a camera switching control 104 , a shooting control 105 , an album control 106 and a cloud photo taking control 107 .
  • the preview box 102 is used to display the preview image captured by the camera of the terminal device 100 in a specific shooting mode.
  • the shooting mode bar 103 is used to select the above shooting mode.
  • the camera switching control 104 is used to switch the camera currently capturing the image.
  • the shooting control 105 can receive the user's input operation (such as a touch operation). In response to the above input operation, the camera APP can shoot and save the image based on the current shooting mode.
  • the album control 106 is used to view the photos that have been taken.
  • the cloud photo control 107 is used to turn on/off the cloud photo mode.
  • the cloud photo control 107 includes two states: on and off. The user can control the cloud photo control 107 to switch between the two states.
• the cloud photo control 107 shown in FIG. 3B is in the off state; as shown in FIG. 3B and FIG. 3C, after an input operation (such as a click operation) acting on the cloud photo control 107 is detected, the terminal device 100 turns on the cloud photo mode and switches the cloud photo control 107 to the on state.
• in the cloud photo mode, the taken photos are uploaded to the cloud for refinement; if the terminal device 100 has no network connection, the taken photos are refined locally.
  • the embodiment of the present application does not specifically limit the method of enabling the cloud photo mode.
  • the cloud photo mode can also be enabled through the application settings of the camera APP.
  • the shooting mode bar 103 may include: night scene mode, large aperture mode, professional mode, photo mode 103A, video mode, portrait mode 103B, and the like.
  • the shooting mode currently used for shooting shown in FIG3B is the photo mode, and the user can switch shooting modes.
  • the shooting mode bar 103 also includes more options 103C. When the user clicks on more options, the terminal device 100 may be triggered to display more optional shooting modes, such as telephoto mode, landscape mode, panoramic mode, face special effects mode, and the like.
  • the portrait mode 103B is used to refine the characters in the photograph; the large aperture mode is used to blur the background of the target subject in the image and improve the clarity of the target subject; the landscape mode is used to refine the scenery in the photograph; and the face special effects mode is used to add special effects to the face in the photograph.
• the shooting interface 12 also includes a filter control 108; after detecting an input operation (such as a click operation) acting on the filter control 108, the camera APP displays a variety of filter options (such as a landscape filter 109), where each filter option displays a filter effect diagram and a filter name. After the user selects a filter option (for example, the landscape filter 109), the camera APP can add the landscape filter to the image captured by the camera.
  • the camera APP can perform fine-tuning on the image taken by the camera according to the image retouching parameters used when taking pictures.
  • the above-mentioned image retouching parameters indicate one or more image processing methods selected by the user, such as the shooting mode, filter option, beauty option, makeup option, body option and face special effect option. It is understood that one shooting mode can correspond to one image processing method, and one filter option can also correspond to one image processing method.
  • the camera APP of the terminal device 100 is provided with a button or operation supporting a continuous shooting function, and the similarity of the photos taken by the continuous shooting function is usually very high; in the cloud shooting mode, when the continuous shooting function is started, the terminal device 100 can take multiple photos continuously and automatically upload the above multiple photos to the cloud for fine-tuning.
  • the terminal device 100 detects an input operation of the user to start the continuous shooting function (for example, long pressing the shooting control 105); in response to the above input operation, the terminal device 100 starts to continuously shoot photos and displays the number mark 110 of the continuously shot photos.
• the number mark 110 indicates that 1 photo has been continuously shot; as shown in FIG. 3G, the number mark 110 indicates that 2 photos have been continuously shot.
  • the terminal device 100 detects an input operation of the user to stop the continuous shooting (for example, stop long pressing the shooting control 105); in response to the above input operation, the terminal device 100 stops continuously shooting photos, and saves the refined images of the 6 photos of the continuous shooting that are collaboratively refined in the cloud, and displays the thumbnail of the cover image of the photos of this continuous shooting in the album control 106.
  • the cover image can be the first photo of the continuous shooting, or it can be the best image of the photos of this continuous shooting determined by the terminal device 100 using a preset selection model.
  • the terminal device 100 displays the image display interface 13.
  • the image display interface 13 includes an image display frame 201 and a continuous shooting viewing control 202.
  • the image display frame 201 currently displays the cover image of the continuous shooting photos after cloud-based collaborative refinement, such as the refined image of the first image of the continuous shooting.
• after detecting that the user clicks the continuous shooting viewing control 202, the terminal device 100 displays the user interface 14, and the user interface 14 includes thumbnails 203 of each image of the continuous shooting, such as a thumbnail 203A of the second image; after detecting that the user clicks the thumbnail 203A, the terminal device 100 displays the second image after cloud-based collaborative refinement in the image display frame 201.
  • the terminal device 100 determines that the first photo a under a continuous shooting function is the first image under the continuous shooting scene, and uploads the first photo to the cloud for fine-tuning; the terminal device 100 determines that a non-first photo (such as photo b) under a continuous shooting function is a non-first image under the continuous shooting scene, and uploads the difference image between photo b and photo c to the cloud for fine-tuning photo b.
• in one implementation, photo c is the previous photo of photo b in this continuous shooting; in another implementation, photo c is any photo taken before photo b in this continuous shooting (such as the first photo a).
  • the camera APP of the terminal device 100 may provide a variety of image processing methods (e.g., a variety of shooting modes, a variety of makeup options, a variety of filter options, a variety of beauty options, a variety of lighting effect options, a variety of face special effects options, etc.) for the user to choose from in the shooting interface 12; the cloud stores the image processing algorithms/models corresponding to each image processing method.
  • the camera APP of the terminal device 100 takes a photo and uploads the photo to the cloud, it also uploads the image retouching parameters corresponding to the photo.
  • the image retouching parameters are used to indicate one or more image processing methods used when taking the photo, as well as the required image processing parameters; the cloud stores the image processing algorithms/models corresponding to the image processing methods indicated by the above-mentioned image retouching parameters.
• the cloud uses the corresponding image processing algorithm/model to refine the photos, so that the refined photos meet the user's shooting needs on the terminal side.
  • the embodiments of the present application do not specifically limit the image processing method provided by the camera APP of the terminal device 100, the process of the user selecting the image processing method, and the image processing algorithm/model corresponding to the image processing method.
• the user selects the portrait mode as the shooting mode; the photo editing parameters corresponding to the taken photo indicate the above-mentioned portrait mode; the cloud can refine the photo according to the image processing algorithm/model corresponding to the portrait mode.
  • the user selects the shooting mode as the photo mode, and selects the landscape filter in the photo mode through the filter control 108; the photo editing parameters corresponding to the taken photo indicate the landscape filter in the photo mode; the cloud can refine the photo according to the image processing algorithm corresponding to the landscape filter in the photo mode.
  • the user selects portrait mode as the shooting mode, and adjusts the image processing parameters in portrait mode (such as skin smoothing parameters, face slimming parameters, etc.);
  • the photo editing parameters corresponding to the taken photos indicate the above-mentioned portrait mode and the image processing parameters in the above-mentioned portrait mode;
  • the cloud performs fine-tuning on the photos according to the image processing algorithm corresponding to the portrait mode and the above-mentioned image processing parameters.
• Application scenario 2: take a single photo and refine it using the non-continuous shooting function in cloud photo mode
  • the terminal device 100 takes a single photo through the shooting control 105 of the shooting interface 12, and each photo taken is automatically uploaded to the cloud for fine-tuning.
  • the user controls the terminal device 100 to take photos c and b successively, and the image processing methods (such as shooting mode, filter options) selected for taking photos c and photo b can be the same or different.
• after the user's input operation (such as a click operation) acting on the portrait mode 103B is detected, the terminal device 100 switches the shooting mode to the portrait mode.
  • the camera APP takes photo c, obtains and saves the refined image of the photo c collaboratively refined in the cloud.
  • the terminal device 100 takes photo b, obtains and saves the refined image of the photo b collaboratively refined in the cloud.
  • the user clicks the album control 106 and the terminal device 100 displays the refined image of photo b on the image display interface 13; as shown in Figures 4E and 4F, the user slides photo b to the left to view the refined image of the previously taken photo c.
• when taking photo b, if the terminal device 100 confirms that photo b satisfies at least one of the following conditions: "the shooting time interval between photo b and photo c is less than a time threshold (for example, 0.5 s); the similarity between photo b and photo c is greater than a similarity threshold (for example, 80%)", then photo b is determined to be a non-first image in a continuous shooting scenario, and the differential image between photo b and photo c is uploaded to the cloud for fine-tuning photo b; otherwise, photo b is determined to be an image in a non-continuous-shooting scenario, and the complete image of photo b is uploaded to the cloud for fine-tuning.
• in another implementation, when taking photo b, if the terminal device 100 confirms that photo b satisfies "the shooting time interval between photo b and photo c is less than the time threshold, and the similarity between photo b and photo c is greater than the similarity threshold", then photo b is determined to be a non-first image in the continuous shooting scenario.
  • photo c is the photo that was taken by the terminal device 100 before photo b and was recently uploaded to the cloud.
  • photo c can be any photo that was taken by the terminal device 100 before photo b and was uploaded to the cloud within a preset time (e.g., 1 minute).
• in yet another implementation, the terminal device 100 traverses the similarities between photo b and the photos uploaded to the cloud in order of shooting time from most recent to earliest, and photo c is the first photo whose similarity is greater than the similarity threshold.
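• As an illustration of this non-first-image decision, a minimal sketch follows (Python with OpenCV is assumed; the histogram-correlation similarity measure, the helper names and the threshold values are illustrative assumptions, not something fixed by this application):

    import cv2

    TIME_THRESHOLD_S = 0.5      # example time threshold from the text
    SIMILARITY_THRESHOLD = 0.8  # example similarity threshold (80%)

    def similarity(img_a, img_b):
        # Assumed similarity measure for this sketch: correlation of
        # normalized 64-bin grayscale histograms.
        hists = []
        for img in (img_a, img_b):
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            cv2.normalize(hist, hist)
            hists.append(hist)
        return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

    def is_non_first_burst_image(photo_b, photo_c, t_b, t_c):
        # Photo b counts as a non-first image of the continuous shooting
        # scene when at least one condition holds; the stricter variant
        # described above replaces "or" with "and".
        if photo_c is None:
            return False  # no earlier uploaded photo: photo b is a first image
        close_in_time = (t_b - t_c) < TIME_THRESHOLD_S
        similar = similarity(photo_b, photo_c) > SIMILARITY_THRESHOLD
        return close_in_time or similar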
• for the image processing method and image retouching parameters of application scenario 2, refer to the relevant description of application scenario 1, which will not be repeated here.
  • the terminal device 100 refines the preview image captured by the camera on the terminal side, and displays the preview image refined on the terminal side in the preview frame 102; uploads the photo taken in response to the user's shooting operation to the cloud for refinement, and displays the cloud-refined photo on the image display interface 13.
• the terminal side adopts an image processing algorithm/model with a lower load requirement than the cloud side; when a photo is refined using the same image processing method, the image quality of the photo refined on the cloud side is higher than that of the photo refined on the terminal side.
• for example, the preview frame 102 of Figure 3D shows a preview image without a landscape filter; the preview frame 102 of Figure 3E shows a locally refined preview image after adding a landscape filter, whose image quality is higher than that of the unrefined preview image; and the image display frame 201 of Figure 3J shows a photo taken by the camera under the landscape filter and refined on the cloud side, whose image quality is higher than that of the locally refined preview image.
• Application scenario 3: refine each preview image captured by the camera in cloud photo mode
  • the terminal device 100 can also upload the preview image captured by the camera to the cloud for refinement in real time, and display the preview image after collaborative refinement in the cloud in the preview frame 102, so that the user can preview the cloud-refined effect of the image through the preview frame 102 of the shooting interface 12.
• compared with refining a taken photo, refining the preview image imposes a stricter requirement on the image processing delay; only when the image processing delay is kept within a low range can the user preview the refined image in time through the preview frame 102.
• in this application scenario, the terminal device 100 determines that the first preview image a captured by the camera after starting the shooting interface 12 is the first image in the continuous shooting scene, and uploads it to the cloud for fine-tuning; a preview image b captured after the first preview image is determined to be a non-first image in the continuous shooting scene, and a differential image between preview image b and a preview image c is uploaded to the cloud for fine-tuning preview image b.
• in one implementation, preview image c is the previous preview image of preview image b captured after the current start-up of the shooting interface 12; in another implementation, preview image c is any preview image captured before preview image b after the current start-up of the shooting interface 12, such as the first preview image a.
• when the preview image or its differential image is uploaded to the cloud, the retouching parameters corresponding to the preview image are also uploaded; the retouching parameters indicate the image processing method used by the camera APP when capturing the preview image.
• the above-mentioned image processing method and retouching parameters can refer to the relevant description of application scenario 1, which will not be repeated here. It should be noted that when capturing preview image a, preview image b and preview image c, the image processing method (such as shooting mode, filter option) adopted by the camera APP can be the same or different.
  • Figures 3A to 4F only illustrate the relevant user interfaces of the camera APP on the terminal device 100, and should not constitute a limitation on the embodiments of the present application.
  • the above interface may include more or fewer controls.
  • the above APP1 may be a system camera application of the terminal device 100, or other system applications or third-party applications with a shooting function, such as an instant messaging application with a shooting function, which is not specifically limited here.
• Application scenario 4: refine multiple local images with one click
• APP2 (e.g., an album APP) of the terminal device 100 provides a function of refining multiple local images with one click; upon detecting a user's input operation for one-click fine-tuning of multiple images, the terminal device 100 uploads the multiple images to the cloud for fine-tuning.
  • FIG. 5A to FIG. 5H show a related user interface for one-click refinement of multiple images.
• FIG. 5A shows an image display interface 15 of the photo album APP, and the image display interface 15 displays multiple images stored locally.
• the terminal device 100 displays a selection control on each image; the selection control 301 of image d is in the selected state, and the selection controls of the other images are in the unselected state; by clicking a selection control, the control can be switched between the selected state and the unselected state.
  • a menu bar 302 is also displayed, and the menu bar 302 displays a sharing control, a collection control, a deletion control, a creation control 302A and more controls.
  • the terminal device 100 detects that the user clicks the creation control 302A; in response to the click operation, the terminal device 100 displays the creation menu bar 303, and the creation menu bar 303 includes a one-key editing control 303A.
• after detecting that the user clicks the one-key editing control 303A, the terminal device 100 displays an editing bar 304 in the user interface 16.
  • the editing controls in the editing bar 304 are used to perform specific image processing on the image, such as a beauty control, a filter control, a high-definition repair control 304A, a character special effect control, etc.
  • the beauty control is used to trigger the terminal device 100 to display a variety of beauty options for adding beauty effects to the characters in the image
  • the filter control is used to trigger the terminal to display a variety of filter options for adding filter effects to the image
  • the high-definition repair control 304A is used to repair the image content to improve the content clarity and visual effects
  • the character special effect control is used to trigger the terminal device 100 to display a variety of character special effect options for adding character special effects to the characters in the image.
• after detecting that the user clicks the high-definition repair control 304A, the terminal device 100 uploads the multiple images selected by the user to the cloud for fine-tuning, displays the refined images of the above multiple images collaboratively refined on the cloud, and also displays a confirmation control 306 and a cancel control 305.
  • the confirmation control 306 is used to confirm the high-definition repair of the image
  • the cancel control 305 is used to cancel the high-definition repair of the image.
• in some embodiments, the attribute parameters of the images in the album include file time; the multiple images are sorted according to the file time, and the terminal device 100 sequentially determines whether each image is a first image of the continuous shooting scene. In some embodiments, the multiple images are sorted according to the order in which the images were selected by the user, and whether each image is a first image of the continuous shooting scene is determined in sequence.
• the first image of the above-mentioned multiple images is used as the first image of the continuous shooting scene; for a non-first image among the above-mentioned multiple images (for example, image b), if the terminal device 100 confirms that image b satisfies at least one of the following conditions: "the similarity between image b and image c is greater than the similarity threshold; the shooting time interval between image b and image c is less than the time threshold", then image b is determined to be a non-first image in the continuous shooting scene, and the differential image between image b and image c is uploaded to the cloud for fine-tuning image b; otherwise, image b is determined to be an image in a non-continuous-shooting scene, and image b is uploaded to the cloud for fine-tuning.
• in another implementation, if the terminal device 100 confirms that image b satisfies "the similarity between image b and image c is greater than the similarity threshold, and the shooting time interval between image b and image c is less than the time threshold", image b is determined to be a non-first image in the continuous shooting scenario.
  • the image c is the previous image of the image b in the multiple images.
  • Image c is any image before image b among the multiple images mentioned above.
• the attribute parameters of the locally stored image b may indicate the source of image b; for example, the image name of an image obtained by screenshot includes "screenshot", and the image name of a taken photo includes "IMG". It can be understood that if image b is a photo taken by a camera, the file time of image b may include the shooting time of image b.
• Application scenario 5: refine a single local image
  • APP2 (such as an album APP) of the terminal device 100 provides a function of refining a single image.
• upon detecting a user's input operation for fine-tuning a single image b, the terminal device 100 uploads image b to the cloud for refining.
• Figures 6A to 6D show a user interface related to fine-tuning a single image; in this example, the terminal device 100 performs fine-tuning on image c and image b successively, and the image processing methods of image c and image b can be the same or different.
  • the image display interface 15 of the photo album APP displays the locally stored images c and b. After the user triggers the terminal device 100 to perform fine-tuning on image c, the image display interface 15 displays the fine-tuned image of image c.
• the terminal device 100 displays the image display interface 13 of image b, and the image display interface 13 displays image b and the editing control 401; as shown in FIG. 6B and FIG. 6C, after detecting that the user clicks the editing control 401, the terminal device 100 displays the editing bar 402 on the image editing interface 17; the editing controls in the editing bar 402 are used to process image b, such as beauty controls, filter controls, a high-definition portrait control 402A, character special effects controls, etc.; the high-definition portrait control 402A is used to repair the characters in the image to improve the clarity and visual effects of the characters.
  • the terminal device 100 uploads image b to the cloud for refinement, and displays the refined image of image b after refinement, and also displays a confirmation control 403 and a cancel control 404;
  • the confirmation control 403 is used to confirm the repair of the image, and after confirming the repair, the above-mentioned refined image can be saved to the album;
  • the cancel control 404 is used to cancel the repair of the image.
• in one implementation, image c is the last image uploaded to the cloud for refinement; in another implementation, image c is any image that has been uploaded to the cloud for refinement; in yet another implementation, image c is any image that has been uploaded to the cloud for refinement within a preset time period (for example, 10 minutes).
  • the photo album APP provides a variety of local image processing methods (such as multiple filter options, multiple beauty options, multiple makeup options, multiple character special effects options, high-definition restoration options, etc.) for users to choose from, and the cloud stores the image processing algorithms/models corresponding to each image processing method.
  • the photo album APP of the terminal device 100 uploads the local image to the cloud for fine-tuning, it also uploads the image retouching parameters corresponding to the local image, and the image retouching parameters indicate the above-mentioned image processing method.
  • the image retouching parameters and image processing methods can refer to the relevant description of application scenario one.
  • Figures 5A to 6D only illustrate the user interface of the album APP on the terminal device 100, and should not constitute a limitation on the embodiments of the present application.
  • the above interface may include more or fewer controls.
  • the above APP2 may be a system album application of the terminal device 100, or other system applications or third-party applications that have the function of refining local images.
  • the end-cloud collaborative image processing method provided in the embodiment of the present application can also be applied to other application scenarios that require uploading images to the cloud, and no specific limitation is made here.
  • the refinement mode can be divided into full-image refinement and subject refinement; in the full-image refinement mode, the image refinement area includes the entire area of the image; in the subject refinement mode, the image refinement area includes the display area of the target subject in the image, and the size of the refinement area is smaller than the image size.
• before uploading an image to the cloud, the end side can identify the refinement mode of the image to be processed; in the full-image refinement mode, the complete image of the image to be processed, or the differential image of the complete image, is uploaded to the cloud; in the subject refinement mode, the end side identifies the target subject in the image to be processed, cuts the target subject image out of the image to be processed, and only sends the target subject image, or the differential image of the target subject image, to the cloud; the cloud returns the differential image of the target subject image before and after refinement to the end side; based on the differential image and the target subject image, the end side can restore the refined image of the target subject image; by replacing the target subject image in the image to be processed with its refined image, the refined image of the image to be processed can be obtained.
• in this way, the amount of data for uploading and returning images is further reduced, thereby reducing the network traffic and latency required for uploading and returning images and effectively improving the user experience.
  • the terminal side pre-stores a correspondence between the image processing method and the fine-tuning mode.
  • the image processing method of the image to be processed may include one or more of the following: shooting mode, filter, character special effects, high-definition restoration, beauty, body beauty, high-definition portrait, etc.; the fine-tuning modes corresponding to different image processing methods may be the same or different.
  • the shooting mode may include one or more of a portrait mode, a large aperture mode, a night mode, a landscape mode, a professional mode, etc.
  • Portrait mode and large aperture mode correspond to the subject refinement mode, night mode, landscape mode, and professional mode correspond to the full-image refinement mode;
  • various filter options correspond to the full-image refinement mode;
  • various beauty options correspond to the full-image refinement mode;
  • various character special effects options correspond to the subject refinement mode;
  • high-definition restoration corresponds to the full-image refinement mode; high-definition portrait corresponds to the subject refinement mode.
  • application developers can set the correspondence between image processing methods and refinement modes according to actual needs, and no specific limitation is made here.
• in some embodiments, when the image processing methods of the image to be processed correspond to different refinement modes, the full-image refinement mode is used for the image.
• for example, the image processing methods for the image to be processed include portrait mode and filter 1, where the portrait mode corresponds to the subject refinement mode and filter 1 corresponds to the full-image refinement mode; in this case, the refinement mode of the image is determined to be the full-image refinement mode, and there is no need to cut out the target subject image.
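• The above correspondence and the conflict rule can be sketched as a simple lookup; the method identifiers below are hypothetical names, and the table values merely restate the examples given in this description:

    # Hypothetical correspondence table between image processing methods
    # and refinement modes, restating the examples above.
    FULL_IMAGE = "full_image_refinement"
    SUBJECT = "subject_refinement"

    MODE_OF_METHOD = {
        "portrait_mode": SUBJECT,
        "large_aperture_mode": SUBJECT,
        "night_mode": FULL_IMAGE,
        "landscape_mode": FULL_IMAGE,
        "professional_mode": FULL_IMAGE,
        "filter": FULL_IMAGE,
        "beauty": FULL_IMAGE,
        "character_special_effects": SUBJECT,
        "hd_restoration": FULL_IMAGE,
        "hd_portrait": SUBJECT,
    }

    def refinement_mode(selected_methods):
        # If any selected method calls for full-image refinement, the
        # whole image is refined (the portrait mode + filter 1 example).
        modes = {MODE_OF_METHOD[m] for m in selected_methods}
        return FULL_IMAGE if FULL_IMAGE in modes else SUBJECT

    # Example: portrait mode plus a filter resolves to full-image refinement.
    assert refinement_mode(["portrait_mode", "filter"]) == FULL_IMAGE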
  • the user selects the landscape filter in the photo mode to take pictures, and the photos taken under the landscape filter adopt the full-image refinement mode; by comparing the preview image before selecting the landscape filter shown in Figure 3C, and the refined images of the photos taken under the landscape filter shown in Figures 3J to 3L, it can be seen that the photos adopt the full-image refinement mode.
  • the terminal device 100 takes a photo in portrait mode, and the target subject of the photo is a person; by comparing the preview image before selecting the portrait mode shown in Figure 4A, and the refined image of the photo taken in portrait mode shown in Figure 4E, it can be seen that the photo uses the subject refinement mode and only the person in the photo is refined.
  • a correspondence between subject types and refinement modes is pre-stored, and subject types may include landscapes, people, animals, buildings, cities, etc.
  • the terminal side can intelligently identify the subject type of the target subject in the image to be processed.
  • the refinement mode corresponding to landscapes, buildings, and cities is the overall refinement mode; the refinement mode corresponding to people and animals is the subject refinement mode.
• if the subject type of the target subject in the image to be processed corresponds to the subject refinement mode, the terminal device 100 determines that the refinement mode of the image to be processed is the subject refinement mode; otherwise, it is the full-image refinement mode.
  • the image within the refined area of the image to be processed can be referred to as a region map; in the full image refinement mode, the region map of the image to be processed is the image to be processed; in the subject refinement mode, the region map of the image to be processed is the target subject image cut out from the image to be processed.
• for image a (which is not a non-first image in the continuous shooting scene), the terminal device 100 sends the complete image of the region map of image a to the cloud, receives the differential image of the region map before and after refinement sent by the server 200, and restores the refined image of image a based on the differential image.
• for image b (a non-first image in the continuous shooting scene), the terminal device 100 sends the differential image between the region map of image b and the region map of the reference image (i.e., image c) to the cloud, where this differential image is used to restore the region map of image b; it then receives the differential image of the region map before and after refinement sent by the cloud, and restores the refined image of image b based on that differential image.
• image c is the reference image used for differencing with image b; image c is an image that has been uploaded to the cloud, or an image uploaded to the cloud together with image b; for example, image c is the aforementioned image a.
• FIG. 7A shows a flowchart of a device-cloud collaborative image processing method, which includes steps S101 to S124.
  • the method includes the following three stages: a device-side upload processing stage, a cloud-side processing stage, and a device-side backhaul processing stage.
  • the terminal device 100 obtains the current image to be processed.
  • the current image to be processed may be image a, image b or image c of any of the aforementioned application scenarios.
  • the image to be processed may be a photo taken by the camera APP of the terminal device 100; when the user's shooting operation is detected, the camera APP obtains the image to be processed taken by the camera.
  • the image to be processed may be a preview image captured by the camera APP of the terminal device 100; after detecting the operation of the shooting interface 12 for displaying the preview image, the camera APP obtains the image to be processed captured by the camera.
  • the image to be processed is an image stored locally in the terminal device 100; when a user operation that triggers the refinement of the local image is detected, the album APP uses the image as the image to be processed.
  • the terminal device 100 identifies whether the refinement mode of the image to be processed is the subject refinement mode; if not, execute S103; if so, execute S104.
• the terminal device 100 determines that the refinement mode of the image to be processed is the full-image refinement mode, and region map 1 of the image to be processed is the image to be processed itself.
• the terminal device 100 determines that region image 1 of the image to be processed is the target subject image in the image to be processed, cuts region image 1 out of the image to be processed, and records the position of region image 1 in the image to be processed.
• specifically, the terminal device 100 identifies a target subject in the image to be processed and determines that the refined area includes the display area of the target subject, that is, the refined area is greater than or equal to the display area of the target subject.
  • the embodiment of the present application does not specifically limit the implementation method of indicating the position of the area diagram 1 in the image to be processed.
  • the region image 1 (i.e., the target subject image) is rectangular.
  • the upper left corner coordinates and lower right corner coordinates of the region image 1 in the image to be processed indicate the position of the region image 1 in the image to be processed; or, the upper left corner coordinates of the region image 1 in the image to be processed, as well as the height and width of the region image 1, indicate the position of the region image 1 in the image to be processed.
  • the edge of the region image 1 (i.e., the target subject image) is composed of the edge line of the target subject in the image to be processed, that is, the shape of the region image 1 is the shape of the target subject.
  • the coordinates of each pixel point on the edge line of the target subject indicate the position of the region image 1 in the image to be processed.
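• For the rectangular case, the cutting and position recording, and the later splicing of the refined region back into the image to be processed, can be sketched as follows (NumPy-style arrays and the helper names are assumptions of this sketch; the edge-line case would track a mask instead of a rectangle):

    import numpy as np

    def cut_region(image, top_left, bottom_right):
        # Cut a rectangular region map (e.g. the target subject image) out
        # of an H x W x C image; coordinates are (row, col) pairs. Either
        # both corner pairs, or the top-left corner plus height and width,
        # pin the position of the region in the image.
        (y0, x0), (y1, x1) = top_left, bottom_right
        region = image[y0:y1, x0:x1].copy()
        position = {"top_left": (y0, x0), "height": y1 - y0, "width": x1 - x0}
        return region, position

    def paste_region(image, refined_region, position):
        # Replace the region at the recorded position with its refined
        # version, yielding the refined image of the image to be processed
        # (the splicing step described later in this flow).
        y0, x0 = position["top_left"]
        h, w = position["height"], position["width"]
        out = image.copy()
        out[y0:y0 + h, x0:x0 + w] = refined_region
        return out

    # Example: cut a 100 x 80 region out of a blank image and paste it back.
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    region, pos = cut_region(img, (10, 20), (110, 100))
    restored = paste_region(img, region, pos)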
• in some embodiments, steps S102 to S104 do not need to be performed, that is, there is no need to determine the refinement mode, and for any image to be processed, the complete image or the differential image of the complete image is uploaded to the cloud.
  • the terminal device 100 identifies whether the image to be processed is a non-first image in a continuous shooting scenario; if not, execute S106; if so, execute S107.
• when the image to be processed satisfies any one of the following conditions 1 to 3, the image to be processed is a non-first image in the continuous shooting scene; otherwise, it is not a non-first image in the continuous shooting scene.
• Condition 1: the image to be processed is not the first image taken by the continuous shooting function.
• Condition 2: the shooting time interval between the image to be processed and image c is less than a time threshold (e.g., 0.5 s).
• Condition 3: the similarity between the image to be processed and image c is greater than a similarity threshold (e.g., 80%).
• in the subject refinement mode, condition 3 specifically includes: the similarity between the region map of the image to be processed (i.e., the target subject map) and the region map of image c is greater than the similarity threshold.
  • image c and the image to be processed use the same fine-tuning mode;
  • image c is an image that has been uploaded to the server 200 before the image to be processed, or an image uploaded to the server 200 at the same time as the image to be processed.
• image c can be a non-first image in a continuous shooting scene, in which case uploading image c to the server 200 means uploading the differential image corresponding to the area map of image c to the server 200, and the cloud can restore the complete image of the area map of image c based on that differential image; image c may also not be a non-first image in a continuous shooting scene (such as image a), in which case uploading image c to the server 200 means uploading the complete image of the area map of image c to the server 200.
  • the image to be processed is the aforementioned image a, image a does not satisfy any of the aforementioned conditions, and image a is not the non-first image in the continuous shooting scene.
• the image to be processed is the aforementioned image b, image b satisfies at least one of the aforementioned conditions, and image b is a non-first image in the continuous shooting scene.
  • image c is the last image uploaded to the server 200.
  • image c can be any image uploaded to the server 200 within a preset time (e.g., 10 minutes); specifically, images uploaded to the server 200 within the preset time are traversed from the latest to the oldest upload time to determine whether there is an image c that meets condition 3.
  • image c can be any image uploaded to the server 200 at the same time as the image to be processed; specifically, images uploaded to the server 200 at the same time are traversed to determine whether there is an image c that meets condition 3.
  • image c is the previous image to be processed of the current image to be processed.
• in the aforementioned application scenarios 1 and 2, image c may be the previous photo taken by the camera; in the aforementioned application scenario 3, image c may be the previous preview image captured by the camera; in the aforementioned application scenarios 4 and 5, image c is the previous image to be processed according to the file time or the time when the user selected the images; in the aforementioned application scenario 5, image c may be the previous image uploaded to the cloud.
  • step S105 may specifically include steps D1 to D3 .
  • D3. Determine whether the similarity between the image to be processed and the previous image to be processed is greater than a similarity threshold; if so, the image to be processed is a non-first image in the continuous shooting scene; otherwise, determine that the image to be processed is not a non-first image in the continuous shooting scene.
  • step S105 may be executed first to determine the non-first image of the continuous shooting scene, and then step S102 may be executed to identify the fine-tuning mode.
• if the image to be processed is not a non-first image in the continuous shooting scene, the terminal device 100 uploads a request message to the server 200; the request message includes area map 1, the image ID of area map 1 and the image type of area map 1, where the image type indicates that area map 1 is a non-differential image.
• it can be understood that in step S106, in the full-image refinement mode, the terminal device 100 uploads the image to be processed to the server 200; in the subject refinement mode, the terminal device 100 uploads the target subject image in the image to be processed to the server 200.
  • the terminal device 100 obtains a differential image 1 between the region image 1 of the image to be processed and the reference image.
• differential image 1 = source image (i.e., region image 1 of the image to be processed) - reference image.
  • the terminal device 100 selects one image from the images uploaded to the server 200 or the images uploaded to the cloud together as the reference image of the image to be processed according to a preset rule.
• the reference image of the image to be processed is usually similar to the image to be processed, and the image to be processed and its reference image usually use the same refinement mode.
• in the full-image refinement mode, area image 1 is the image to be processed itself. If the image to be processed meets the aforementioned condition 1, the image to be processed is a photo taken by the continuous shooting function, and the reference image of the image to be processed can be the previous photo of the image to be processed in the continuous shooting, the first photo of the continuous shooting, or any photo taken before the image to be processed in the continuous shooting; the previous photo in the continuous shooting is used below as an illustrative example. If the image to be processed meets the aforementioned condition 2 and/or condition 3, the reference image of the image to be processed is the aforementioned image c; for example, in application scenario 2, image c is the previous photo taken by the camera.
• in the subject refinement mode, the reference image of region map 1 (i.e., the target subject map) of the image to be processed is the region map (i.e., the target subject map) of the reference image of the image to be processed.
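• A minimal sketch of the differencing and of the matching differential restoration performed on the receiving side, assuming 8-bit images and a simple signed pixel difference (one possible differential algorithm; several alternatives are listed later in this description):

    import numpy as np

    def diff_image(source, reference):
        # differential image = source image - reference image; a signed
        # difference is used here so that the source can be restored exactly.
        return source.astype(np.int16) - reference.astype(np.int16)

    def restore_image(diff, reference):
        # differential restoration: source image = reference image + diff.
        restored = reference.astype(np.int16) + diff
        return np.clip(restored, 0, 255).astype(np.uint8)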
  • the terminal device 100 determines whether the data volume of the differential image 1 is smaller than the data volume of the regional image 1; if so, execute S109; otherwise, execute S106.
  • Step S108 is optional. In some embodiments, step S109 is directly executed after step S107 without executing S108.
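• The data-volume comparison of step S108 can be sketched as follows; the PNG-based size measure, the uint16 offset for encodability, and the type strings are assumptions for illustration (the string "srcImage" follows the request-message example later in this description, and "diffImage" is an assumed counterpart):

    import cv2
    import numpy as np

    def data_volume(img):
        # Assumed measure of "data volume": losslessly (PNG) encoded length.
        ok, buf = cv2.imencode(".png", img)
        return len(buf) if ok else img.nbytes

    def choose_upload(region_map, diff):
        # Upload differential image 1 only when its data volume is smaller
        # than that of region map 1; otherwise upload the region map itself.
        # The signed int16 difference is offset into uint16 purely so that
        # PNG can encode it; this is an assumption of the sketch.
        diff_u16 = (diff.astype(np.int32) + 255).astype(np.uint16)
        if data_volume(diff_u16) < data_volume(region_map):
            return "diffImage", diff_u16
        return "srcImage", region_map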
• the terminal device 100 uploads a request message to the server 200, including differential image 1, the image ID of differential image 1, the source image ID, the reference image ID and the image type, where the image type indicates that differential image 1 is a differential image.
  • the source image corresponding to the difference image 1 is the region image 1 of the image to be processed, the source image ID is the image ID of the source image, and the reference image ID is the image ID of the reference image.
  • the server 200 receives a request message.
  • the server 200 determines whether the image type in the request message is a differential image; if so, execute S112; otherwise, execute S113.
  • the server 200 restores the region image 1 of the image to be processed according to the reference image ID in the request message and the differential image 1; the server 200 determines that the image ID of the restored region image 1 is the source image ID in the request message.
• if the image type is a differential image, the differential image carried by the request message needs to be restored to the region map of the image to be processed; if the image type is a non-differential image, it indicates that the request message carries the region map of the image to be processed, and no differential restoration is required.
• in some embodiments, when the image type is a differential image, the request message also carries a differential algorithm identifier, and the terminal device 100 uses the differential algorithm indicated by the differential algorithm identifier to obtain differential image 1 of the two images.
• in other embodiments, the terminal device 100 and the server 200 pre-negotiate the use of a specific differential algorithm, and the request message does not need to carry a differential algorithm identifier.
  • the difference algorithm can be an absolute value difference method, an inter-frame difference method (absDiff), a relative difference method, a threshold difference method, a normalized difference method, or a Gaussian difference method, etc.
  • the server 200 records the region map 1 and the image ID of the region map 1 , and obtains a refined image of the region map 1 of the image to be processed.
  • the request message carries the image retouching parameters of the image to be processed, and the server 200 performs fine-tuning on the region 1 of the image to be processed according to the image retouching parameters to obtain the refined image of the region 1.
  • the above-mentioned image retouching parameters can refer to the relevant description of the aforementioned application scenario, which will not be repeated here.
  • the server 200 uses the region image 1 of the image to be processed as the reference image, and obtains a difference image 2 before and after the refinement of the region image 1.
• differential image 2 = source image (i.e., the refined image of region image 1) - reference image (i.e., region image 1).
  • the server 200 determines whether the data volume of the differential image 2 is smaller than the data volume of the refined image of the regional image 1; if so, execute S116; otherwise, execute S117.
  • Step S115 is optional. In some embodiments, step S116 is directly executed after step S114 without executing S115.
  • the server 200 transmits a response message to the terminal device 100.
• the response message includes differential image 2, the image ID of differential image 2, the reference image ID and the image type; the image type indicates that differential image 2 is a differential image.
  • the server 200 transmits a response message back to the terminal device 100.
• the response message includes the refined image of area image 1, the image ID of area image 1 and the image type; the image type indicates that the refined image is a non-differential image.
• that is, if the data volume of differential image 2 is smaller than that of the refined image, differential image 2 is transmitted back to the terminal device 100; if the data volume of differential image 2 is greater than or equal to that of the refined image, the refined image is directly transmitted back.
  • the terminal device 100 receives a response message.
  • the terminal device 100 determines whether the image type in the response message is a differential image; if so, execute S120; otherwise, execute S121.
• if the image type is a differential image, the differential image carried in the response message needs to be restored to a refined image; the reference image ID in the response message indicates the image ID of region image 1 of the image to be processed.
• if the image type is a non-differential image, it indicates that the response message carries a refined image and no differential restoration is required; the image ID in the response message indicates the image ID of region image 1 of the image to be processed.
  • the response message may also carry a differential algorithm identifier, and the terminal device 100 performs differential restoration according to the restoration algorithm corresponding to the differential algorithm indicated by the differential algorithm identifier.
• the terminal device 100 determines whether the image to be processed corresponding to area image 1 adopts the subject refinement mode; if not, execute S122; if yes, execute S123.
• the response message indicates the image ID of area image 1, and the terminal device 100 can query the refinement mode of the image to be processed according to the image ID of area image 1; it can be understood that in the full-image refinement mode, the image ID of area image 1 is the image ID of the image to be processed, and in the subject refinement mode, the image ID of area image 1 is the image ID of the target subject image of the image to be processed.
  • the terminal device 100 records the correspondence between the image ID of the image to be processed and the image ID of the target subject image; in step S121, the image to be processed and the refinement mode of the image to be processed can be queried based on the correspondence.
• in one implementation, the terminal device 100 records, in file 1, the image IDs of the region maps of images to be processed in the subject refinement mode; if file 1 includes the image ID of region map 1, the refinement mode of the image to be processed corresponding to region map 1 is subject refinement. In another implementation, the terminal device 100 records, in file 2, the image IDs of images to be processed in the full-image refinement mode; if file 2 does not include the image ID of region map 1, the refinement mode of the image to be processed corresponding to region map 1 is subject refinement.
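• A minimal sketch of the file-1 bookkeeping described above (the in-memory set standing in for file 1, and the helper names, are assumptions of this sketch):

    # The end side records the image IDs of region maps produced in the
    # subject refinement mode; membership decides the mode on return.
    subject_region_ids = set()  # stands in for "file 1"

    def record_subject_region(image_id):
        subject_region_ids.add(image_id)

    def refinement_mode_of(region_image_id):
        return "subject" if region_image_id in subject_region_ids else "full_image"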
  • the terminal device 100 determines that the refined image of the region image 1 is the refined image of the image to be processed.
  • the terminal device 100 obtains a refined image of the image to be processed according to the position of the region image 1 in the image to be processed and the refined image of the region image 1.
  • the terminal device 100 replaces the target subject map in the image to be processed with the refined map of the target subject map to obtain the refined map of the image to be processed.
  • the region image 1 of the image to be processed is the target subject image after cutting, and the refined image of the region image 1 and the image to be processed need to be spliced to obtain the refined image of the image to be processed.
  • the terminal device 100 displays a refined image of the image to be processed.
  • the terminal device 100 can display the refined image for the user to view.
  • FIG8 shows a device architecture diagram provided by an embodiment of the present application.
  • the terminal device 100 may include an acquisition module, a cutting module, a continuous shooting recognition module, a terminal side difference module and a display module.
  • the server 200 may include a cloud-side difference module and a refinement module.
  • the acquisition module on the terminal side is used to acquire the image to be processed, for example, the image is the aforementioned image a, image b or image c.
• for an image to be processed in the subject refinement mode, the cutting module on the terminal side is used to cut the target subject image out of the image to be processed and determine region image 1 of the image to be processed as the target subject image; for an image to be processed in the full-image refinement mode, region image 1 of the image to be processed is determined to be the image to be processed itself.
• the continuous shooting recognition module on the end side is used to identify whether the current image to be processed is a non-first image in the continuous shooting scene; if it is not (such as the aforementioned image a), the terminal device 100 is triggered to send area map 1 of the image to be processed to the server 200; if it is (such as the aforementioned image b), the end-side difference module is instructed to obtain the differential image of area map 1.
• for a non-first image in the continuous shooting scene (such as the aforementioned image b), the end-side difference module is used to obtain differential image 1 between the region map of the image and the reference image, and the terminal device 100 is triggered to upload differential image 1 and the reference image ID to the server 200.
  • the reference image has been uploaded to the cloud, or is uploaded to the cloud at the same time as the differential image 1.
  • the cloud-based differential module is used to restore the region map 1 of the image to be processed according to the received differential image 1 and the reference image ID.
  • the cloud-based refinement module is used to refine the region image 1 of the image to be processed and obtain a refined image of the region image 1.
• the cloud-side differential module is also used to obtain differential image 2 before and after the refinement of the above-mentioned region image 1, triggering the server 200 to send differential image 2 and the reference image ID (i.e., the image ID of region image 1 of the image to be processed) to the terminal device 100.
  • the difference module on the terminal side is also used to restore the refined image of the region image 1 of the image to be processed based on the difference image 2 and the region image 1 of the image to be processed.
• for an image to be processed in the subject refinement mode, the cutting module on the terminal side is also used to replace region image 1 in the image to be processed with the refined image of region image 1, so as to obtain the refined image of the image to be processed; it can be understood that for an image to be processed in the full-image refinement mode, the refined image of region image 1 is itself the refined image of the image to be processed.
  • the display module on the terminal side is used to display the refined image of the image to be processed.
  • the embodiment of the present application does not specifically limit the implementation of the image ID.
  • the image IDs of the images to be processed are numbered 1, 2, 3, etc.
  • the image with the image ID x can be referred to as image x.
  • the image processing flow is exemplified below for the full image refinement mode and the main body refinement mode respectively.
  • FIG9A shows an image processing flow of an image to be processed in the full image refinement mode
  • FIG9B shows a schematic diagram of each image involved in the processing flow, and the processing flow includes steps A1 to A24.
  • the specific implementation of each step can refer to the relevant description of the corresponding step in FIG7A.
  • the images to be processed include image 1 and image 4; image 1 is not the non-first image in the continuous shooting scene; image 4 is the non-first image in the continuous shooting scene.
  • Image 1 and image 4 can be images in any of the aforementioned application scenarios.
  • the acquisition module of the terminal device 100 acquires the image 1 to be processed.
  • Image 1 is the first photo taken by the camera APP of the terminal device 100 using the full-image retouching mode.
  • FIG9B shows a schematic diagram of Image 1.
  • the acquisition module may also acquire image parameters of image 1, including part or all of the image name, retouching mode, file time (e.g., shooting time), image ID, and retouching parameters.
• if image 1 is a photo taken by the continuous shooting function, the image parameters also include a continuous shooting identifier, which may indicate that image 1 is a photo taken by the continuous shooting function and whether it is the first photo of the continuous shooting.
  • the continuous shooting identifier may be an independent identifier, or it may be carried in the image name or image ID.
• for example, the image name of a continuous shooting photo includes a continuous shooting identifier and a continuous shooting number; the continuous shooting identifier is "Burst", and the continuous shooting number indicates which photo of the continuous shooting the photo is; for example, the image name of the first photo of a continuous shooting includes Burst01, and the image name of the third photo includes Burst03.
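• A hypothetical parser for this naming convention (the regular expression and the example image names are assumptions of this sketch):

    import re

    BURST_RE = re.compile(r"Burst(\d+)")

    def burst_info(image_name):
        # Returns (is_burst_photo, is_first_of_burst) parsed from an image
        # name such as "IMG_1234_Burst03".
        m = BURST_RE.search(image_name)
        if m is None:
            return False, False
        return True, int(m.group(1)) == 1

    assert burst_info("IMG_1234_Burst01") == (True, True)
    assert burst_info("IMG_1234_Burst03") == (True, False)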
• the cutting module of the terminal device 100 determines, according to the full-image refinement mode adopted by image 1, that there is no need to cut a target subject image out of image 1, and the region map within the refined area of image 1 is image 1 itself. For example, image 1 is a photo taken by the terminal device 100 using a landscape filter, and the refinement mode corresponding to the landscape filter is the full-image refinement mode.
• the continuous shooting recognition module of the terminal device 100 determines that image 1 is not a non-first image of the continuous shooting scene.
  • the terminal-side difference module determines that there is no need to perform difference on image 1, and records image 1 and the image ID of image 1.
  • the embodiment of the present application does not specifically limit the implementation of interaction between modules within the terminal device 100.
  • the acquisition module sends image 1, the image ID of image 1, and the fine-tuning mode of image 1 to the cutting module, and calls the cutting module to execute step A2; the acquisition module sends image 1, the image ID of image 1, and the continuous shooting identification of image 1 to the continuous shooting identification module, and calls the continuous shooting identification module to execute step A3; the acquisition module sends the execution results of steps A2 and A3, as well as image 1 and the image ID of image 1 to the end-side differential module, and calls the end-side differential module to execute step A4.
  • the execution order of steps A2 and A3 is not specifically limited here.
  • the acquisition module sends image 1 and image parameters of image 1 to the cutting module, and calls the cutting module to execute step A2; when there is no need to cut image 1, the cutting module sends image 1 and image parameters of image 1 to the continuous shooting recognition module, and calls the continuous shooting recognition module to execute step A3; the continuous shooting recognition module sends image 1, image ID of image 1 and the execution result of step A3 to the end-side differential module, and calls the end-side differential module to execute step A4.
  • the end-side differential module triggers the terminal device 100 to send a request message 1 to the server 200.
  • the request message 1 includes image 1, image ID of image 1, image type of image 1 and image retouching parameters.
  • the image type indicates that image 1 is a non-differential image.
• image 1 adopts the full-image refinement mode, and image 1 is not a non-first image of the continuous shooting scene; therefore, the terminal device 100 does not need to perform a difference on image 1 and directly uploads the complete image of image 1 to the server 200.
  • the request message 1 may also include an image name.
• the image name, image type, and image ID carried in request message 1 are as follows:
{
"imageName": "Image 1", //image name
"imagetype": "srcImage", //image type, srcImage indicates a non-differential image
"imageId": "1" //image ID
}
  • the cloud-based differential module determines that there is no need to perform differential restoration on the image 1, and records the image 1 and the image ID of the image 1.
• the refinement module of the server 200 obtains a refined image of image 1 (i.e., image 2).
  • the retouching module of the server 200 performs retouching on the image 1 according to the image processing algorithm indicated by the retouching parameters in the request message 1, and obtains the retouched image 2.
  • the retouching module of the server 200 sends the image 2 to the cloud difference module.
  • the cloud-based differential module obtains the refined image of image 1 (i.e., image 2) and the differential image between image 2 and image 1 (i.e., image 3), where image 1 is the reference image of the differential image.
  • the cloud-based differential module triggers the server 200 to send a response message 1 to the terminal device 100.
  • the response message 1 includes image 3, the image ID of image 3, the reference image ID, the image type and the differential algorithm identifier.
  • the image type indicates that image 3 is a differential image.
  • only when the cloud-based difference module determines that the differential image (i.e., image 3) has a smaller data volume than the refined image (i.e., image 2) is the differential image transmitted back to the terminal device 100 via response message 1.
  • the image name, image type, differential algorithm identifier, image ID, and reference image ID carried in the response message 1 are as follows:
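  • by analogy with the payload of request message 1 shown above, a plausible layout for these fields is sketched below; the "diffImage" type tag, the field names for the algorithm identifier and reference image ID, and the algorithm identifier value are illustrative assumptions, not values fixed by this embodiment:
    {
      "imageName": "Image 3",       // image name
      "imageType": "diffImage",     // image type; assumed tag for a differential image
      "diffAlgorithmId": "diff-v1", // differential algorithm identifier (illustrative value)
      "imageId": "3",               // image ID of image 3
      "refImageId": "1"             // reference image ID (the image ID of image 1)
    }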
  • the end-side difference module determines that differential restoration is required according to the differential image indicated by the image type in the response message 1, and restores the image 2 according to the image 3, the image 1 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier.
  • image 2 (i.e., the refined image of image 1) = image 3 (i.e., the differential image) + image 1.
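  • the additive relationship above admits a simple per-pixel implementation. The following Python sketch is illustrative only (this embodiment does not fix a particular differential algorithm): the differential image is a signed pixel-wise subtraction, and restoration adds the differential back onto the reference image:
    import numpy as np

    def compute_diff(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
        # differential image = target - reference, kept in a signed type so that
        # negative pixel differences remain representable (assumes uint8 inputs)
        return target.astype(np.int16) - reference.astype(np.int16)

    def restore(diff: np.ndarray, reference: np.ndarray) -> np.ndarray:
        # restored image = reference + differential image
        return np.clip(reference.astype(np.int16) + diff, 0, 255).astype(np.uint8)
  • with these helpers, the end side would obtain image 2 as restore(image_3, image_1); the cloud side restores uploaded differential images against their recorded reference images in the same way.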
  • the cutting module of the terminal device 100 determines that the image 2 is a refined image of the image 1.
  • the cutting module of the terminal device 100 records the image ID of the cut target main image; if the cutting module determines that the recorded image ID of the target main image does not include the reference image ID (i.e., the image ID of image 1), it is determined that image 1 is not the target main image, but a complete image to be processed.
  • the terminal device 100 records the image ID and refinement mode of the image to be processed; if the cutting module queries that the image ID of the image to be processed includes the reference image ID (i.e., the image ID of image 1), it is determined that image 1 is not the target subject image, but a complete image to be processed.
  • the display module of the terminal device 100 displays a refined image of the image 1 (ie, image 2).
  • the step A12 is optional.
  • the embodiment of the present application does not specifically limit the time for the terminal device 100 to display the image 1.
  • the acquisition module of the terminal device 100 acquires the image 4 to be processed.
  • image 4 is the second photo taken by the camera APP of the terminal device 100 using the full-image retouching mode.
  • FIG9B shows a schematic diagram of image 4.
  • the acquisition module may also acquire image parameters of image 4, including part or all of the image name, retouching mode, file time, image ID, continuous shooting flag, and retouching parameters, etc.
  • the cutting module of the terminal device 100 determines, according to the full-image refinement mode adopted by image 4, that image 4 does not need to be cut; the region image within the refinement region is image 4 itself.
  • the continuous shooting recognition module of the terminal device 100 determines that image 4 is a non-first image of the continuous shooting scene.
  • for example, image 4 is the second photo taken by the continuous shooting function, and the terminal device 100 therefore determines that image 4 is a non-first image in the continuous shooting scene.
  • alternatively, image 4 is a photo taken without the continuous shooting function, and the terminal device 100 determines that the shooting time interval between this photo and the previous photo (i.e., image 1) is less than the time threshold, or that its similarity with the previous photo is greater than the similarity threshold; therefore, image 4 is determined to be a non-first image in the continuous shooting scene.
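  • the recognition logic described in the three bullets above can be sketched as follows; the 0.5 s time threshold and 80% similarity threshold are the example values given later in this embodiment, while the Photo record and the pixel-based similarity measure are illustrative assumptions:
    from dataclasses import dataclass
    from typing import Optional
    import numpy as np

    TIME_THRESHOLD_S = 0.5      # example time threshold from this embodiment
    SIMILARITY_THRESHOLD = 0.8  # example similarity threshold from this embodiment

    @dataclass
    class Photo:                      # minimal stand-in for a captured photo record
        pixels: np.ndarray            # decoded image data
        shot_time: float              # shooting time, in seconds
        burst_flag: bool = False      # set by the continuous shooting function
        first_of_burst: bool = False  # True only for the first photo of a burst

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        # illustrative measure: fraction of near-identical pixels
        if a.shape != b.shape:
            return 0.0
        return float(np.mean(np.abs(a.astype(int) - b.astype(int)) < 8))

    def is_non_first_burst_image(photo: Photo, prev: Optional[Photo]) -> bool:
        if prev is None:
            return False              # no earlier photo to difference against
        if photo.burst_flag and not photo.first_of_burst:
            return True               # explicitly marked by the continuous shooting function
        close_in_time = photo.shot_time - prev.shot_time < TIME_THRESHOLD_S
        return close_in_time or similarity(photo.pixels, prev.pixels) > SIMILARITY_THRESHOLD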
  • the end-side difference module obtains a difference image (ie, image 5) between image 4 and image 1, where image 1 is a reference image of the difference image.
  • the terminal device 100 selects a reference image for difference for image 4, for example, the reference image is the previous photo (ie, image 1).
  • the end-side differential module triggers the terminal device 100 to send a request message 2 to the server 200, including image 5, the image ID of image 5, the reference image ID, the source image ID, the image type and the differential algorithm identifier, and the image type indicates that image 5 is a differential image.
  • request message 1 of step A5 and request message 2 of step A17 may also be uploaded to server 200 at the same time, that is, image 1 (i.e., the first photo) and image 5 (i.e., the differential image between the second photo and the first photo) may be uploaded to server 200 at the same time.
  • image name, image type, differential algorithm identifier, image ID, reference image ID, and source image ID (i.e., image ID of image 4) carried in request message 2 are as follows:
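  • a plausible payload for request message 2, following the field conventions of request message 1 and adding the reference and source image IDs described above (again, the added field names and the algorithm identifier value are illustrative assumptions), is:
    {
      "imageName": "Image 5",       // image name
      "imageType": "diffImage",     // image type; assumed tag for a differential image
      "diffAlgorithmId": "diff-v1", // differential algorithm identifier (illustrative value)
      "imageId": "5",               // image ID of image 5
      "refImageId": "1",            // reference image ID (the image ID of image 1)
      "srcImageId": "4"             // source image ID (the image ID of image 4)
    }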
  • the cloud-based differential module determines that differential restoration is required based on the differential image indicated by the image type in the request message 2, and restores image 4 based on image 5, image 1 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier, and records image 4 and the image ID of image 4, where the image ID of image 4 is the above-mentioned source image ID.
  • the retouching module of the server 200 obtains a retouched image of the image 4 (ie, the image 6).
  • the refinement module of the server 200 sends the image 6 to the cloud-based difference module.
  • step A7 and step A19 can be executed simultaneously, that is, image 1 and image 4 are refined simultaneously.
  • the refinement module is provided with multiple containers for refinement, and the server 200 assigns image 1 and image 4 to different containers, where the refinement task execution modules in the two containers refine the two images in parallel.
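  • a minimal sketch of this parallelism, using a process pool as a stand-in for the containerized refinement task execution modules (the refine placeholder below stands in for whatever image processing algorithm the refinement parameters select):
    from concurrent.futures import ProcessPoolExecutor

    def refine(image_bytes: bytes) -> bytes:
        # placeholder for the refinement task execution module inside one container
        return image_bytes

    def refine_in_parallel(images: list[bytes]) -> list[bytes]:
        # one worker per image, mirroring the one-container-per-image assignment
        with ProcessPoolExecutor(max_workers=max(1, len(images))) as pool:
            return list(pool.map(refine, images))

    # image 1 and image 4 are refined simultaneously in separate workers:
    # refined_1, refined_4 = refine_in_parallel([image_1_bytes, image_4_bytes])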
  • the cloud-based differential module obtains the refined image of image 4 (i.e., image 6) and the differential image between image 6 and image 4 (i.e., image 7), where image 4 is the reference image of the differential image.
  • the cloud-based differential module triggers the server 200 to send a response message 2 to the terminal device 100, including image 7, the image ID of image 7, the reference image ID, the image type and the differential algorithm identifier, and the image type indicates that image 7 is a differential image.
  • only when the cloud-based differential module determines that the differential image (i.e., image 7) has a smaller data volume than the refined image (i.e., image 6) is the differential image transmitted back to the terminal device 100 via response message 2.
  • the image name, image type, differential algorithm identifier, image ID, and reference image ID carried in the response message 2 are as follows:
  • the end-side differential module determines that differential restoration is required because the image type in response message 2 indicates a differential image, and restores image 6 according to image 7, the image 4 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier.
  • image 6 (i.e., the refined image of image 4) = image 7 (i.e., the differential image) + image 4.
  • the cutting module of terminal device 100 determines that image 6 is a refined image of image 4.
  • for details, please refer to the relevant description of step A11, which will not be repeated here.
  • the display module of the terminal device 100 displays a refined image of the image 4 (ie, image 6).
  • this method identifies the user's continuous shooting scene and uploads and returns the photo data in a differential manner, which greatly reduces the amount of data transmitted over the network and shortens the overall processing time.
  • the acquisition module on the end side also acquires the third photo taken by the camera APP in the full-image retouching mode.
  • the cutting module on the end side determines that there is no need to cut the target subject according to the full-image retouching mode adopted by the photo.
  • the differential module on the end side acquires the differential image corresponding to the third photo, and triggers the device to send the differential image to the server 200.
  • the differential image can be a differential image between the third photo and the second photo, or a differential image between the third photo and the first photo, that is, the reference image of the differential image can be the second photo or the first photo.
  • since the cloud-side differential module has already acquired the first photo and the second photo, the third photo can be restored according to the above-mentioned differential image and the reference image.
  • the subsequent cloud-side retouching and end-side backhaul processing of the third photo can refer to the relevant description of the second photo (i.e., image 4) in Figures 9A and 9B, which will not be repeated here.
  • FIG10A shows an image processing flow of an image to be processed in a subject refinement mode
  • FIG10B shows a schematic diagram of each image involved in the processing flow, and the processing flow includes steps A1 to A24.
  • the specific implementation of each step can refer to the relevant description of the corresponding step in FIG7A.
  • the images to be processed include image 11 and image 16; image 11 is not a non-first image in a continuous shooting scene; image 16 is a non-first image in a continuous shooting scene.
  • Image 11 and image 16 can be images in any of the aforementioned application scenarios.
  • the acquisition module of the terminal device 100 acquires the image 11 to be processed.
  • image 11 is the first photo taken by the camera APP of the terminal device 100 using the subject refinement mode.
  • FIG10B shows a schematic diagram of image 11.
  • the acquisition module may also acquire image parameters of the image 11, including part or all of the image name, retouching mode, file time (e.g., shooting time), image ID, continuous shooting identifier, and retouching parameters, etc.
  • image parameters of the image 11 may refer to the relevant description of the image 1, which will not be described in detail here.
  • the cutting module of the terminal device 100 determines to cut the target subject image (ie, image 12) from the image 11 according to the subject refinement mode adopted by the image 11; and records the position of the image 12 in the image 11.
  • image 11 is the first photo taken by the terminal device 100 in portrait mode, and the portrait mode corresponds to the subject refinement mode.
  • FIG10B exemplarily shows a schematic diagram of cutting out a target subject image (ie, image 12) from image 11.
  • the cutting module of the terminal device 100 records the image parameters of the target main image (i.e., image 12), including the image ID of the target main image, the image ID of the corresponding full image (i.e., image 11), and the position in image 11 (i.e., the coordinates of the upper left corner, width, and height).
  • image parameters 2 of image 12 are as follows:
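  • the fields just listed (the image ID of the target subject image, the image ID of the full image, and the position) suggest a layout like the following; all field names and coordinate values here are illustrative assumptions:
    {
      "imageId": "12",     // image ID of the target subject image
      "fullImageId": "11", // image ID of the corresponding full image (image 11)
      "x": 120, "y": 80,   // upper-left corner coordinates of image 12 within image 11
      "width": 480,        // width of the cropped region
      "height": 640        // height of the cropped region
    }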
  • the continuous shooting recognition module of the terminal device 100 determines that the image 11 is not a non-first image of the continuous shooting scene.
  • image 11 is the first photo taken by the continuous shooting function, and the terminal device 100 determines that image 11 is not a non-first image in the continuous shooting scenario.
  • the terminal-side difference module determines that there is no need to perform difference on the target subject image of image 11 (ie, image 12 ), and records image 12 and the image ID of image 12 .
  • the end-side differential module triggers the terminal device 100 to send a request message 3 to the server 200, including the image 12, the image ID and the image type, where the image type indicates that the image 12 is a non-differential image.
  • the embodiment of the present application does not specifically limit the implementation of the interaction between modules in the terminal device 100; for details, please refer to the relevant description of step A5, which will not be repeated here.
  • image 11 adopts the subject refinement mode, and image 11 is not the non-first image of the continuous shooting scene; therefore, the terminal device 100 does not need to differentiate the target subject image of image 11, and directly uploads the complete image of the target subject image of image 11 to the server 200.
  • the request message 3 may also include the image name of the target subject image.
  • the image name, image type, and image ID carried in the request message 3 are as follows:
  • the cloud-based differential module determines that differential restoration is not required and records the image 12 and the image ID of the image 12.
  • the refinement module of the server 200 obtains a refined image of the image 12 (ie, image 13).
  • the retouching module of the server 200 performs retouching on the image 12 according to the image processing algorithm indicated by the retouching parameters in the request message 3, obtains the retouched image 13, and sends the image 13 to the cloud difference module.
  • the cloud-based differential module obtains the refined image of image 12 (i.e., image 13) and the differential image between image 13 and image 12 (i.e., image 14), where image 12 is the reference image of the differential image.
  • the cloud-based differential module triggers the server 200 to send a response message 3 to the terminal device 100.
  • the response message 3 includes the image 14, the image ID of the image 14, the reference image ID, the image type and the differential algorithm identifier.
  • the image type indicates that the image 14 is a differential image.
  • only when the cloud-based differential module determines that the differential image (i.e., image 14) has a smaller data volume than the refined image (i.e., image 13) is the differential image transmitted back to the terminal device 100 via response message 3.
  • the image name, image type, differential algorithm identifier, image ID, and reference image ID carried in the response message 3 are as follows:
  • the end-side difference module determines that differential restoration is required because the image type in response message 3 indicates a differential image, and restores image 13 according to image 14, the image 12 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier.
  • image 13 (i.e., the refined image of image 12) = image 14 (i.e., the differential image) + image 12.
  • the cutting module of the terminal device 100 determines that the image 12 indicated by the reference image ID is the target main image, uses the image 13 to replace the image 12 in the image 11 to obtain the image 15, and determines that the image 15 is the refined image of the image 11.
  • for the specific implementation of step B11, reference may be made to the relevant description of step A11, which will not be repeated here.
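  • a minimal sketch of this replacement step, assuming the position recorded when the subject was cropped is stored as upper-left corner coordinates plus width and height:
    import numpy as np

    def replace_subject(full_image: np.ndarray, refined_subject: np.ndarray,
                        x: int, y: int, w: int, h: int) -> np.ndarray:
        # paste the refined target subject image back into the full image
        # at the position recorded when the subject was cropped out;
        # refined_subject is assumed to have shape (h, w, channels)
        result = full_image.copy()
        result[y:y + h, x:x + w] = refined_subject
        return result

    # image_15 = replace_subject(image_11, image_13, x, y, w, h)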
  • the display module of the terminal device 100 displays a refined image of the image 11 (ie, image 15).
  • the step B12 is optional.
  • the embodiment of the present application does not impose any specific limitation on the time for the terminal device 100 to display the image 11.
  • the acquisition module of the terminal device 100 acquires the image 16 to be processed.
  • image 16 is a second photo taken by the camera APP of the terminal device 100 using the subject refinement mode.
  • FIG10B shows a schematic diagram of image 16 .
  • the acquisition module may also acquire image parameters of the image 16, including part or all of the image name, retouching mode, file time, image ID, continuous shooting flag, and retouching parameters, etc.
  • the cutting module of the terminal device 100 cuts the target subject image (ie, image 17) from the image 16 according to the subject refinement mode adopted by the image 16; and records the position of the image 17 in the image 16.
  • FIG. 10B exemplarily shows a schematic diagram of cutting out a target main body image (ie, image 17 ) from image 16 .
  • the cutting module of the terminal device 100 records the image parameters of the target subject image (i.e., image 17), including the image ID of the target subject image, the image ID of the corresponding full image (i.e., image 16), and the position in image 16 (i.e., the upper-left corner coordinates, width, and height).
  • image parameters of image 17 are as follows:
  • the continuous shooting recognition module of the terminal device 100 determines that image 16 is a non-first image of the continuous shooting scene.
  • for example, image 16 is the second photo taken by the continuous shooting function, and the terminal device 100 therefore determines that image 16 is a non-first image in the continuous shooting scene.
  • alternatively, image 16 is a photo taken without the continuous shooting function, and the terminal device 100 determines that the shooting time interval between this photo and the previous photo (i.e., image 11) is less than the time threshold, or that the similarity between the target subject image of this photo and the target subject image of the previous photo is greater than the similarity threshold; therefore, image 16 is determined to be a non-first image in the continuous shooting scene.
  • the end-side differential module obtains a differential image (ie, image 18) between image 17 and image 12.
  • the terminal device 100 selects a reference image for the target subject image of image 16 (i.e., image 17) for differential, for example, the reference image is the target subject image of the previous photo (i.e., image 12).
  • the end-side differential module triggers the terminal device 100 to send a request message 4 to the server 200, including the image 18, the image ID of the image 18, the reference image ID, the source image ID, the image type and the differential algorithm identifier, where the image type indicates a differential image.
  • only when the end-side differential module determines that the differential image (i.e., image 18) has a smaller data volume than the original image (i.e., image 17) is the differential image uploaded to the server 200 via request message 4.
  • the request message 3 of step B5 and the request message 4 of step B17 may also be uploaded to the server 200 at the same time, that is, image 12 (i.e., the target subject image of the first photo) and image 18 (i.e., the differential image between the target subject image of the second photo and the target subject image of the first photo) may be uploaded to the server 200 at the same time.
  • the image name, image type, differential algorithm identifier, image ID, reference image ID, and source image ID carried in the request message 4 are as follows:
  • the cloud-based differential module determines that differential restoration is required because the image type in request message 4 indicates a differential image, restores image 17 based on image 18, the image 12 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier, and records image 17 and the image ID of image 17.
  • the image ID of image 17 is the above-mentioned source image ID.
  • the refinement module of the server 200 obtains a refined image of the image 17 (ie, image 19).
  • the retouching module of the server 200 retouches the image 17 according to the image processing algorithm indicated by the retouching parameters in the request message 4, and obtains the retouched image 19.
  • the retouching module of the server 200 sends the image 19 to the cloud difference module.
  • step B7 and step B19 can be executed simultaneously, that is, image 12 and image 17 are refined simultaneously.
  • the cloud-based differential module obtains the refined image of image 17 (i.e., image 19) and the differential image between image 19 and image 17 (i.e., image 20), where image 17 is the reference image of the differential image.
  • the cloud-based differential module triggers the server 200 to send a response message 4 to the terminal device 100, including the image 20, the image ID of the image 20, the reference image ID, the image type and the differential algorithm identifier, wherein the image type indicates that image 20 is a differential image.
  • the cloud-based differential module transmits the differential image back to the terminal device 100 via the response message 4 only when it determines that the differential image (ie, image 20) has a smaller data volume than the refined image (ie, image 19).
  • the image name, image type, differential algorithm identifier, image ID, and reference image ID carried in the response message 4 are as follows:
  • the end-side difference module determines that differential restoration is required because the image type in response message 4 indicates a differential image, and restores image 19 according to image 20, the image 17 indicated by the reference image ID, and the restoration algorithm indicated by the differential algorithm identifier.
  • the cutting module of the terminal device 100 determines that the image 17 indicated by the reference image ID is the target main image, uses image 19 to replace image 17 in image 16 to obtain image 21, and determines that image 21 is a refined image of image 16.
  • the display module of the terminal device 100 displays a refined image of the image 16 (ie, image 21).
  • the step B24 is optional.
  • the embodiment of the present application does not impose any specific limitation on the time for the terminal device 100 to display the image 16 .
  • the acquisition module on the end side also acquires the third photo taken by the camera APP in the subject refinement mode.
  • the cutting module on the end side determines, according to the subject refinement mode adopted by the photo, to cut the target subject image from the third photo.
  • the differential module on the end side acquires the differential image corresponding to the target subject image of the third photo, and triggers the device to send the differential image to the server 200.
  • the differential image can be a differential image between the target subject image of the third photo and the target subject image of the second photo, or a differential image with the target subject image of the first photo, that is, the reference image of the differential image is the target subject image of the second photo or the target subject image of the first photo. Since the cloud-side differential module has acquired the target subject image of the first photo and the target subject image of the second photo, the target subject image of the third photo can be restored based on the above differential image and the reference image.
  • the subsequent cloud-based refinement and end-side backhaul processing of the third photo can refer to the relevant description of the second photo (i.e., image 16) in Figures 10A and 10B, and will not be repeated here.
  • an embodiment of the present application provides an end-cloud collaborative image processing method, which includes steps S201 to S208.
  • S201 The electronic device obtains a first image to be processed.
  • S202 The electronic device sends a first area map of a first image to a server through a first request message; the first area map includes images within a partial or entire area of the first image.
  • S203 The electronic device obtains a second image to be processed, and the second region map includes images within a partial or entire region of the second image.
  • S204 The electronic device sends a first differential image to the server through a second request message, where the first differential image is a differential image between the second region image of the second image and the first region image of the first image.
  • S205 The server restores the second region map according to the first differential image and the first region map.
  • S206 The server performs image processing on the second region map to obtain a refined image of the second region map.
  • S207 The server sends a second response message to the electronic device, where the second response message is used to indicate a refined image of the second area image.
  • S208 The electronic device determines a refined image of the second image based on the refined image of the second region image.
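  • taken together, the end-side half of steps S201 to S208 can be sketched as follows; the send_request helper and message fields are illustrative assumptions, and compute_diff/restore reuse the differential sketch given earlier:
    def process_second_image(first_region, second_region, server):
        # S202 has already uploaded the first region map via the first request message.
        # S204: upload only the first differential image of the second region map.
        diff_1 = compute_diff(second_region, first_region)
        response = server.send_request({"imageType": "diffImage", "data": diff_1,
                                        "refImageId": "1", "srcImageId": "2"})
        # S205 to S207 run on the server: restore, refine, and reply.
        # S208: recover the refined region map from the second response message.
        if response["imageType"] == "diffImage":
            # the second differential image is applied to the second region map
            refined_region = restore(response["data"], second_region)
        else:
            refined_region = response["data"]  # refined map returned whole
        return refined_region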
  • the image to be processed involved in the aforementioned embodiment may be the first image, the second image or the third image.
  • the first image may also be the image a involved in the aforementioned embodiment, which is not the non-first image in the continuous shooting scene;
  • the second image/third image may be the image b involved in the aforementioned embodiment, that is, the non-first image in the continuous shooting scene.
  • the second response message includes a second differential image, which is a differential image between a refined image of the second region image and the second region image; the method also includes: the electronic device restoring the refined image of the second region image based on the second differential image and the second region image.
  • the second region map may be the aforementioned region map 1
  • the first difference image may be the aforementioned difference image 1
  • the second difference image may be the aforementioned difference image 2 .
  • the method also includes: the server performs image processing on the first area image to obtain a refined image of the first area image; the server sends a first response message to the electronic device, the first response message is used to indicate the refined image of the first area image; the electronic device determines the refined image of the first image based on the refined image of the first area image.
  • the first image and the first region map of the first image may be the aforementioned image 1; the first request message may be the aforementioned request message 1, and the first response message may be the aforementioned response message 1.
  • the second image and the second region map of the second image may be the aforementioned image 4, the first difference image may be the aforementioned image 5; the second request message may be the aforementioned request message 2, and the second response message may be the aforementioned response message 2.
  • the first image may be the aforementioned image 11, and the first region map of the first image may be the aforementioned image 12; the first request message may be the aforementioned request message 3, and the first response message may be the aforementioned response message 3.
  • the second image may be the aforementioned image 16, the second region map of the second image may be the aforementioned image 17, and the first difference image may be the aforementioned image 18; the second request message may be the aforementioned request message 4, and the second response message may be the aforementioned response message 4.
  • the electronic device is provided with a continuous shooting function, and the first image and the second image are two images taken in one continuous shooting function; the method further includes: the electronic device detects a first input operation for starting the continuous shooting function; the electronic device obtains the first image to be processed, and the electronic device obtains the second image to be processed, including: in response to the first input operation, the electronic device obtains the first image and the second image of the continuous shooting; the first image is the first image of the continuous shooting, and the second image is a non-first image of the continuous shooting.
  • the first input operation may include a long press operation on the shooting control 105 .
  • the electronic device acquires a first image to be processed, including: in response to a detected first shooting instruction, the camera of the electronic device captures the first image; the electronic device acquires a second image to be processed, including: in response to a detected second shooting instruction, the camera of the electronic device captures the second image; the shooting time interval between the second image and the first image is less than a time threshold, and/or the similarity between the second image and the first image is greater than a similarity threshold.
  • the time threshold is equal to 0.5s
  • the similarity threshold is equal to 80%.
  • the first shooting instruction and the second shooting instruction may both include a click operation acting on the shooting control 105 .
  • the method also includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image being a differential image between a third area map of the third image and a first area map of the first image; the third area map includes images within part or all of the area of the third image; the server restores the third area map based on the third differential image and the first area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message being used to indicate the refined image of the third area map; the electronic device determines the refined image of the third image based on the refined image of the third area map.
  • the first image may be the first photo taken
  • the second image may be the second photo taken
  • the third image may be the nth photo taken (eg, the third photo).
  • the method also includes: the electronic device obtains a third image to be processed; the electronic device sends a third differential image to the server through a third request message, the third differential image being a differential image between a third area map of the third image and a second area map of the second image; the third area map includes images within part or all of the area of the third image; the server restores the third area map based on the third differential image and the second area map; the server performs image processing on the third area map to obtain a refined image of the third area map; the server sends a third response message to the electronic device, the third response message being used to indicate the refined image of the third area map; the electronic device determines the refined image of the third image based on the refined image of the third area map.
  • the first image may be the first photo taken
  • the second image may be the second photo taken
  • the third image may be the third photo taken.
  • the terminal side may upload to the cloud a differential image between the third image and any image that has been uploaded to the cloud.
  • the electronic device sends the first differential image to the server via the second request message, including: when the data volume of the first differential image is less than the data volume of the second area map, the electronic device sends the first differential image and the image ID of the first area map to the server via the second request message. When the data volume of the second differential image is less than the data volume of the refined image of the second area map, the second response message includes the second differential image and the image ID of the second area map, where the second differential image is a differential image between the refined image of the second area map and the second area map; when the data volume of the second differential image is not less than the data volume of the refined image of the second area map, the second response message includes the refined image of the second area map.
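  • the size comparisons on both legs of the exchange can be sketched as follows; comparing encoded byte lengths and the "refinedImage" tag are illustrative assumptions:
    def pick_upload_payload(diff_bytes: bytes, region_bytes: bytes) -> dict:
        # end side: upload the first differential image only if it is smaller
        if len(diff_bytes) < len(region_bytes):
            return {"imageType": "diffImage", "data": diff_bytes}
        return {"imageType": "srcImage", "data": region_bytes}

    def pick_response_payload(diff_bytes: bytes, refined_bytes: bytes) -> dict:
        # cloud side: return the second differential image only if it is smaller
        # than the refined region map itself
        if len(diff_bytes) < len(refined_bytes):
            return {"imageType": "diffImage", "data": diff_bytes}
        return {"imageType": "refinedImage", "data": refined_bytes}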
  • when the image refinement mode is the full-image refinement mode, the region map of the image includes the image content in the entire region of the image; when the refinement mode of the second image is the full-image refinement mode, the second image and the second region map are the same, and the refined map of the second region map is the refined map of the second image.
  • the first image and the first region map of the first image are both image 1; the second image and the second region map of the second image are both image 4.
  • when the image refinement mode is the subject refinement mode, the region map of the image includes the target subject in the image, and the size of the region map of the image is smaller than the size of the image; when the refinement mode of the second image is the subject refinement mode, before the electronic device sends the first differential image to the server through the second request message, the method also includes: the electronic device obtains the position of the second region map in the second image, and crops the second region map from the second image; the electronic device determines the refined image of the second image based on the refined image of the second region map, including: based on the position of the second region map in the second image, the electronic device replaces the second region map in the second image with the refined image of the second region map, and obtains the refined image of the second image.
  • the first image can be the aforementioned image 11
  • the first region map of the first image can be the aforementioned image 12
  • the refined image of the first region map can be the aforementioned image 13
  • the refined image of image 11 can be obtained by replacing image 12 in image 11 with image 13.
  • the first request message includes a first photo retouching parameter, and the first photo retouching parameter is used to indicate at least one image processing method adopted for the second image; the above-mentioned server performs image processing on the second area image of the second image to obtain a refined image of the second area image, including: the server performs image processing on the second image according to at least one image processing method indicated by the first photo retouching parameter to obtain a refined image of the second area image.
  • the electronic device pre-stores a correspondence between multiple image processing methods and refinement modes, the refinement modes including a full-image refinement mode and a subject refinement mode;
  • the above-mentioned multiple image processing methods include multiple shooting modes, the above-mentioned multiple shooting modes include a portrait mode, and the refinement mode corresponding to the portrait mode is the subject refinement mode;
  • the image processing method adopted by the image is used to determine the refinement mode of the image.
  • the first request message also includes an image ID of the first area map
  • the second request message also includes a reference image ID and a source image ID of the first differential image, the reference image of the first differential image is the first area map, and the source image of the first differential image is the second area map
  • the server restores the second area map based on the first differential image and the first area map, including: the server restores the second area map based on the first differential image and the first area map indicated by the reference image ID, and determines that the image ID of the restored second area map is the source image ID
  • the second response message includes the image ID of the second area map.
  • the second request message further includes an algorithm identifier, the algorithm identifier indicating the first restoration algorithm; the server restores the second region map according to the first differential image and the first region map, including: the server restores the second region map according to the first differential image, the first region map and the first restoration algorithm.
  • the algorithm identifier may be the aforementioned differential algorithm identifier.
  • FIG11 shows a schematic diagram of the structure of the terminal device 100.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal device 100.
  • the terminal device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • Different processing units may be independent devices or integrated into one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses cyclically. If the instructions or data are needed again, they can be fetched directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple groups of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface, thereby realizing the touch function of the terminal device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 can include multiple I2S buses.
  • the processor 110 can be coupled to the audio module 170 via the I2S bus to achieve communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 via the I2S interface to achieve the function of answering a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 can be coupled via a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 via the PCM interface to implement the function of answering calls via a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate via the CSI interface to implement the shooting function of the terminal device 100.
  • the processor 110 and the display screen 194 communicate via the DSI interface to implement the display function of the terminal device 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the terminal device 100, and can also be used to transmit data between the terminal device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration and does not constitute a structural limitation on the terminal device 100.
  • the terminal device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the terminal device 100. While the charging management module 140 is charging the battery 142, it may also power the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
  • the power management module 141 can also be set in the processor 110.
  • the power management module 141 and the charging management module 140 can also be set in the same device.
  • the wireless communication function of the terminal device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in terminal device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G applied to the terminal device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, and filter, amplify, and process the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 can be set in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 can be set in the same device as at least some of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) and the like applied to the terminal device 100.
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, demodulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, modulate the frequency of the signal, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the terminal device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the terminal device 100 implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the terminal device 100 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • ISP is used to process the data fed back by camera 193. For example, when taking a photo, the shutter is opened, and the light is transmitted to the camera photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to ISP for processing and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on the noise and brightness of the image. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, ISP can be set in camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object is projected onto the photosensitive element through the lens to generate an optical image.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS).
  • the photosensitive element converts the light signal into an electrical signal, which is then transmitted to the ISP for conversion into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the terminal device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals, and can process not only digital image signals but also other digital signals. For example, when the terminal device 100 is selecting a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital videos.
  • the terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record videos in multiple coding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • applications such as intelligent cognition of the terminal device 100 can be realized, such as image recognition, face recognition, voice recognition, text understanding, etc.
  • the internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM, for example, the fifth generation DDR SDRAM is generally referred to as DDR5 SDRAM), etc.; non-volatile memory may include disk storage devices and flash memory (flash memory).
  • Flash memory can be divided into NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. according to the operating principle; can be divided into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc. according to the storage unit potential level; can be divided into universal flash storage (UFS), embedded multi media Card (eMMC), etc. according to the storage specification.
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of the operating system or other running programs, and can also be used to store user and application data, etc.
  • the non-volatile memory may also store executable programs and user and application data, etc., and may be loaded into the random access memory in advance for direct reading and writing by the processor 110 .
  • the external memory interface 120 can be used to connect to an external non-volatile memory to expand the storage capacity of the terminal device 100.
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are stored in the external non-volatile memory.
  • the terminal device 100 can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the terminal device 100 can listen to music or listen to a hands-free call through the speaker 170A.
  • the receiver 170B, also called a "handset", is used to convert audio electrical signals into sound signals.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • Microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into it.
  • the terminal device 100 can be provided with at least one microphone 170C. In other embodiments, the terminal device 100 can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the terminal device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the source of sound, realize directional recording function, etc.
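One common way multiple microphones enable noise reduction and directional recording is delay-and-sum beamforming; a minimal two-microphone sketch follows (the microphone spacing, steering angle, and sample rate are illustrative assumptions, not details of the embodiments):

    import numpy as np

    SPEED_OF_SOUND_M_S = 343.0

    def steering_delay_samples(spacing_m: float, angle_rad: float, fs: int) -> int:
        # Extra acoustic path to the far microphone for a source at angle_rad.
        return round(spacing_m * np.sin(angle_rad) / SPEED_OF_SOUND_M_S * fs)

    def delay_and_sum(mic1: np.ndarray, mic2: np.ndarray, delay: int) -> np.ndarray:
        # Align the second channel to the first, then average: sound from the
        # steered direction adds coherently while off-axis noise does not.
        aligned = np.roll(mic2, -delay)  # edge wrap-around ignored in this sketch
        return 0.5 * (mic1 + aligned)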
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. The pressure sensor 180A can be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like.
  • the gyro sensor 180B can be used to determine the motion posture of the terminal device 100. In some embodiments, the angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B. The gyro sensor 180B can be used for anti-shake shooting.
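A hedged sketch of the anti-shake principle (integrating the measured angular velocity over one frame interval and projecting the resulting angle onto the sensor; the focal length and interval are illustrative assumptions):

    import math

    def shake_offset_px(omega_rad_s: float, dt_s: float, focal_len_px: float) -> float:
        # Angle accumulated during one frame, projected to the pixel offset by
        # which the image (or lens) can be shifted to compensate the shake.
        angle_rad = omega_rad_s * dt_s
        return focal_len_px * math.tan(angle_rad)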
  • the air pressure sensor 180C is used to measure air pressure.
  • the terminal device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
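This pressure-to-altitude step is commonly done with the international barometric formula; a minimal sketch (assuming standard sea-level pressure, which the embodiments do not specify):

    def pressure_to_altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
        # International barometric formula, valid in the troposphere.
        return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))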
  • the magnetic sensor 180D includes a Hall sensor, and the terminal device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the terminal device 100 can measure the distance by infrared or laser.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the sensed brightness of the ambient light.
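A sketch of one possible adaptive mapping (a logarithmic lux-to-backlight curve; the range and levels are illustrative assumptions, not the disclosed behavior):

    import math

    def backlight_level(lux: float, min_level: int = 10, max_level: int = 255) -> int:
        # Perceived brightness is roughly logarithmic in illuminance, so map
        # log10(lux) over a typical 1..10000 lux range onto the backlight.
        t = min(math.log10(max(lux, 1.0)) / 4.0, 1.0)
        return round(min_level + t * (max_level - min_level))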
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal device 100 can use the collected fingerprint characteristics to achieve fingerprint unlocking, access application locks, fingerprint photography, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the terminal device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • the touch sensor 180K is also called a "touch control device".
  • the touch sensor 180K can be set on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch control screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
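A minimal sketch of how a detected touch operation might be classified into an event type before being handed to the application processor (the thresholds are illustrative assumptions; real touch frameworks use tuned values):

    def classify_touch(duration_ms: float, moved_px: float) -> str:
        # Distinguish the basic event types by movement and contact time.
        if moved_px > 24:
            return "swipe"
        if duration_ms >= 500:
            return "long_press"
        return "tap"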
  • the touch sensor 180K can also be set on the surface of the terminal device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of a human vocal part.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 may be a mechanical key or a touch key.
  • the terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100.
  • Motor 191 can generate vibration prompts. Motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • Indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • FIG. 12 exemplarily shows a structure of a server 200 provided in an embodiment of the present application.
  • the server 200 may include: one or more processors 1001, a memory 1002, a communication interface 1003, a transmitter 1005, a receiver 1006, a coupler 1007, and an antenna 1008. These components may be connected via a bus 1004 or by other means; FIG. 12 takes the bus connection as an example. Among them:
  • the communication interface 1003 can be used for the server 200 to communicate with other communication devices, such as the terminal device 100.
  • the communication interface 1003 can be a 3G communication interface, a 4G communication interface, a 5G communication interface, or a communication interface of a future new air interface.
  • the server 200 can also be configured with a wired communication interface 1003, such as a local area network (LAN) interface.
  • the transmitter 1005 can be used to perform transmission processing on the signal output by the processor 1001.
  • the receiver 1006 can be used to perform reception processing on the mobile communication signal received by the antenna 1008.
  • the transmitter 1005 and the receiver 1006 can be regarded as a wireless modem.
  • the number of the transmitter 1005 and the receiver 1006 can be one or more.
  • the antenna 1008 can be used to convert the electromagnetic energy in the transmission line into electromagnetic waves in the free space, or convert the electromagnetic waves in the free space into electromagnetic energy in the transmission line.
  • the coupler 1007 is used to divide the mobile communication signal received by the antenna 1008 into multiple paths and distribute them to multiple receivers 1006.
  • the memory 1002 is coupled to the processor 1001 and is used to store various software programs and/or multiple sets of instructions.
  • the memory 1002 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the memory 1002 may store a network communication program, which may be used to communicate with one or more additional devices, one or more terminal devices, or one or more network devices.
  • the memory 1002 may be used to store an implementation program, on the server 200 side, of the image processing method provided by one or more embodiments of the present application.
  • For the implementation of the image processing method provided by one or more embodiments of the present application, please refer to the foregoing embodiments.
  • the processor 1001 can be used to read and execute computer-readable instructions. Specifically, the processor 1001 can be used to call a program stored in the memory 1002, such as the implementation program, on the server 200 side, of the image processing method provided by one or more embodiments of the present application, and execute the instructions contained in the program.
  • the server 200 shown in FIG. 12 is only one implementation of the embodiments of the present application. In actual applications, the server 200 may also include more or fewer components, which is not limited here.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive (SSD)), etc.
  • the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium.
  • When the program is executed, the processes of the above-mentioned method embodiments can be performed.
  • the aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Facsimiles In General (AREA)

Abstract

The present invention relates to an image processing method based on end-cloud collaboration and a related apparatus. The method comprises the following steps: an electronic device obtains a first image to be processed; the electronic device sends a first region image of the first image to a server by means of a first request message; the electronic device obtains a second image to be processed; the electronic device sends a first differential image to the server by means of a second request message, the first differential image being a differential image between a second region image and the first region image; the server restores the second region image according to the first differential image and the first region image; the server performs image processing on the second region image to obtain a refined image of the second region image; the server sends a second response message to the electronic device, the second response message being used to indicate the refined image of the second region image; and the electronic device determines a refined image of the second image on the basis of the refined image of the second region image. In this way, the data traffic for image transmission can be effectively reduced, the processing delay can be reduced, and high-efficiency image processing can be achieved in cooperation with the cloud.
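As a rough illustration of the differential-transmission idea (the device computes the difference between consecutive region images, and the server adds it back before refinement; the array shapes, dtypes, and clipping here are assumptions, not the claimed encoding):

    import numpy as np

    def make_differential(region2: np.ndarray, region1: np.ndarray) -> np.ndarray:
        # Device side: transmit only the (often sparse, highly compressible)
        # difference between the new region image and the previous one.
        return region2.astype(np.int16) - region1.astype(np.int16)

    def restore_region(diff: np.ndarray, region1: np.ndarray) -> np.ndarray:
        # Server side: recover the second region image exactly, then run the
        # heavyweight refinement on it and return the refined result.
        restored = region1.astype(np.int16) + diff
        return restored.clip(0, 255).astype(np.uint8)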
PCT/CN2024/082306 2023-03-21 2024-03-18 Image processing method based on end-cloud collaboration and related apparatus WO2024193523A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202310354227 2023-03-21
CN202310354227.5 2023-03-21
CN202310809249.6A CN118695090A (zh) End-cloud collaborative image processing method and related apparatus
CN202310809249.6 2023-06-30

Publications (1)

Publication Number Publication Date
WO2024193523A1 true WO2024193523A1 (fr) 2024-09-26

Family

ID=92763518

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/082306 WO2024193523A1 (fr) Image processing method based on end-cloud collaboration and related apparatus

Country Status (2)

Country Link
CN (1) CN118695090A (fr)
WO (1) WO2024193523A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949898A (zh) * 2019-02-19 2019-06-28 Neusoft Medical Systems Co., Ltd. File uploading method, storage method, downloading method, system, apparatus and device
CN112712465A (zh) * 2020-12-31 2021-04-27 Sichuan Changhong Network Technology Co., Ltd. Method and system for optimizing the communication data volume of photographing-type meter-reading terminals
CN113038002A (zh) * 2021-02-26 2021-06-25 Vivo Mobile Communication Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
CN114119392A (zh) * 2021-11-09 2022-03-01 Vivo Mobile Communication Co., Ltd. Image processing method and apparatus, and electronic device
US20220078154A1 (en) * 2019-01-24 2022-03-10 Huawei Technologies Co., Ltd. Image sharing method and mobile device

Also Published As

Publication number Publication date
CN118695090A (zh) 2024-09-24

Similar Documents

Publication Publication Date Title
WO2020233553A1 Photographing method and terminal
WO2021052232A1 Time-lapse photography method and device
WO2021213120A1 Screen projection method and apparatus, and electronic device
WO2021093793A1 Capturing method and electronic device
WO2020244495A1 Screen projection display method and electronic device
WO2020192461A1 Recording method for time-lapse photography, and electronic device
WO2020224485A1 Screenshot method and electronic device
WO2021129198A1 Photographing method in long-focus scenario, and terminal
WO2020244623A1 Method for implementing 3D mouse mode, and related device
WO2021143269A1 Photographing method in long-focus scenario, and mobile terminal
WO2021042978A1 Theme switching method and theme switching apparatus
WO2022127787A1 Image display method and electronic device
CN111132234A Data transmission method and corresponding terminal
WO2022100685A1 Drawing command processing method and related device
EP4060603A1 Image processing method and related apparatus
CN113891009B Exposure adjustment method and related device
WO2022148319A1 Video switching method and apparatus, storage medium, and device
CN112150499B Image processing method and related apparatus
EP4293997A1 Display method, electronic device, and system
WO2021057626A1 Image processing method, apparatus and device, and computer storage medium
WO2023241209A9 Desktop wallpaper configuration method and apparatus, electronic device, and readable storage medium
WO2021204103A1 Image preview method, electronic device, and storage medium
CN113593567A Method for converting video sound into text, and related device
CN112532508B Video communication method and video communication apparatus
CN116055859B Image processing method and electronic device