WO2023138509A1 - Picture processing method and device - Google Patents

Picture processing method and device

Info

Publication number
WO2023138509A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
input
area
elements
interface
Prior art date
Application number
PCT/CN2023/072145
Other languages
English (en)
French (fr)
Inventor
刘少玲
Original Assignee
维沃移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2023138509A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results

Definitions

  • the present application belongs to the technical field of image processing, and specifically relates to an image processing method and device.
  • When traveling, users often use map applications for navigation. For example, the user can first open the map application to check the route and save a screenshot of it, so that the overall route map can be viewed on the way without opening the map application.
  • However, zooming the screenshot only scales the original image itself; it cannot reveal more detailed road information or a wider map area. If the user needs richer road details or wants to browse a wider map area, the only option is to open the map application, re-initiate the navigation, and then view more road details in the map application, which is cumbersome.
  • the purpose of the embodiments of the present application is to provide a method and device for image processing, which can solve the problem that the user cannot view more detailed content or browse a wider area when zooming in on a screenshot.
  • the embodiment of the present application provides a picture processing method, the method comprising:
  • receiving a first input from a user on a first picture, where the first input is used to enlarge or reduce at least part of the first picture; in response to the first input, if the first picture includes an associated picture, generating a second picture according to the associated picture and the first picture, and displaying the second picture;
  • if the first picture does not include an associated picture, enlarging or reducing the first picture according to the input parameters of the first input.
  • an image processing device which includes:
  • a receiving module configured to receive a user's first input on the first picture, and the first input is used to enlarge or reduce at least part of the first picture;
  • a generating module configured to, in response to the first input, generate a second picture according to the associated picture and the first picture when the first picture includes an associated picture;
  • a display module configured to display the second picture
  • a control module configured to zoom in or out the first picture according to the first input parameter when the first picture does not include an associated picture.
  • an embodiment of the present application provides an electronic device, the electronic device includes a processor and a memory, the memory stores programs or instructions that can run on the processor, and when the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • an embodiment of the present application provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • the embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the method described in the first aspect.
  • an embodiment of the present application provides a computer program product, the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the method described in the first aspect.
  • When the electronic device receives the user's input of zooming in or out at least part of the first picture, if the first picture includes an associated picture it will generate a second picture based on the associated picture and the first picture and display the second picture, so that when the user zooms in or out on the first picture, more local map details or the global map route can be viewed through the generated second picture, making the operation more convenient.
  • FIG. 1 is a flow chart of an image processing method provided in an embodiment of the present application
  • Fig. 2 is one of the schematic diagrams of the display interface of the electronic device provided by the embodiment of the present application;
  • Fig. 3a is the second schematic diagram of the display interface of the electronic device provided by the embodiment of the present application.
  • Fig. 3b is the third schematic diagram of the display interface of the electronic device provided by the embodiment of the present application.
  • Fig. 4 is the fourth schematic diagram of the display interface of the electronic device provided by the embodiment of the present application.
  • Fig. 5 is the fifth schematic diagram of the display interface of the electronic device provided by the embodiment of the present application.
  • FIG. 6 is a flowchart of an example image processing method provided by the present application.
  • FIG. 7 is a flow chart of another example image processing method provided by the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device provided in an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of an electronic device provided by another embodiment of the present application.
  • The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in sequences other than those illustrated or described herein. Objects distinguished by "first", "second" and so on are generally of one type, and the number of such objects is not limited; for example, there may be one first object or multiple first objects.
  • "And/or" in the specification and claims means at least one of the connected objects, and the character "/" generally means that the related objects are in an "or" relationship.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the method can be applied to an electronic device, and the electronic device can be a mobile phone, a tablet computer, a notebook computer, and the like.
  • the method may include steps 1100 to 1300, which will be described in detail below.
  • Step 1100 receiving a first input from a user on a first picture, where the first input is used to zoom in or out at least part of the first picture.
  • the first picture is a screenshot generated by taking a screenshot of the first interface displayed on the display screen of the electronic device, and the first picture is located in the photo album. For example, what is displayed on the display screen of the electronic device is the first interface for the user to navigate through the map application. The first interface includes the navigation route for the user to navigate through the map application. Then the first picture is a screenshot of the navigation route.
  • The first input may be an input to at least partially enlarge the first picture, for example, a spreading (outward swipe) input performed by the thumb and index finger.
  • The first input may also be an input to at least partially reduce the first picture, for example, a pinching input performed by the thumb and index finger.
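The pinch and spread inputs above can be distinguished by comparing the distance between the two touch points at the start and end of the gesture. The following Python sketch is illustrative only; the function name, the point format, and the distance heuristic are assumptions, not part of the disclosed embodiment.

```python
import math

def classify_zoom_gesture(start_points, end_points):
    """Classify a two-finger gesture as an enlarge (spread) or reduce
    (pinch) input by comparing finger distances before and after.

    start_points / end_points: ((x, y), (x, y)) tuples for the thumb and
    index finger. Hypothetical helper, not part of the embodiment.
    """
    def distance(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    start_d = distance(start_points)
    end_d = distance(end_points)
    if end_d > start_d:
        return "zoom_in"   # fingers moved apart: enlarge at least part of the picture
    if end_d < start_d:
        return "zoom_out"  # fingers moved together: reduce at least part of the picture
    return "none"
```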
  • the image processing method in the embodiment of the disclosure further includes the following steps 2100 to 2300:
  • Step 2100 when the first interface is displayed, receive a second input from the user, where the second input is used to take a screenshot of the first interface.
  • the second input is a screenshot input for taking a screenshot of the first interface.
  • the user can trigger the second input through physical combination keys, shortcut keys in the status bar, shortcut function buttons on the first interface, and the like.
  • Step 2200 generating the first picture in response to the second input.
  • When the electronic device receives the user's screenshot input on the first interface, it can generate the first picture in response to that input.
  • the user can trigger the screenshot input through the above physical combination keys, shortcut keys on the status bar, shortcut function buttons on the display interface, etc., to control the electronic device to take screenshots and generate the first picture.
  • the map application can provide the navigation route of the starting location "A station” and the ending location "C station".
  • the user may input a control command through the physical key combination of the electronic device to control the electronic device to take a screenshot, and then generate a first picture containing the navigation route, the first picture may be as shown in FIG. 2 .
  • Step 2300 if the first picture satisfies the preset condition, generate an associated picture associated with the first picture.
  • When the electronic device generates the first picture, it also detects whether the first picture satisfies the preset condition, and if so, generates an associated picture associated with the first picture.
  • The associated picture may be the picture shown in FIG. 3a and FIG. 3b.
  • the first area of the first picture corresponds to the second area of the associated picture.
  • the first area of the first picture is the area where the navigation route is located
  • The second area of the associated picture is also the area where the navigation route is located; that is, the first area of the first picture and the second area of the associated picture indicate the same range of the map.
  • In the first picture shown in FIG. 2, the frame area formed by point A in the upper left corner, point B in the upper right corner, point C in the lower left corner, and point D in the lower right corner is the area where the navigation route from the starting position "Station A" to the terminal position "Station C" is located; this navigation route area is the first area.
  • the frame area formed by A1 point in the upper left corner, B1 point in the upper right corner, C1 point in the lower left corner, and D1 point in the lower right corner in the associated picture shown in Figure 3a is the second area.
  • the frame area formed by point A2 in the upper left corner, point B2 in the upper right corner, point C2 in the lower left corner, and point D2 in the lower right corner in the associated picture shown in Figure 3b is the second area.
  • the range of the map indicated by the first area of the first picture and the second area of the associated picture is the same, for example, the leftmost boundary of the first area is 200m behind station C, the rightmost boundary of the first area is 500m behind station A, then the leftmost boundary of the second area is also 200m behind station C, and the rightmost boundary of the second area is also 500m behind station A.
  • the range of the map indicated by the first area of the first picture and the second area of the associated picture is also the same.
  • the associated picture may include a first associated picture and a second associated picture, and each of the first picture, the first associated picture, and the second associated picture includes the first element.
  • the number of first elements in the first associated picture is greater than the number of first elements in the first picture
  • the number of first elements in the second associated picture is smaller than the number of first elements in the first picture.
  • the first related picture may be the related picture shown in FIG. 3a
  • the second related picture may be the related picture shown in FIG. 3b.
  • the above first element may be an element set according to actual application scenarios and actual requirements.
  • the first elements may be subway stations on the subway navigation route, and the number of first elements in the first picture may be the number of subway stations on the subway navigation route in the first picture.
  • the number of the first elements in the first associated picture may be the number of subway stations on the subway navigation route in the first associated picture.
  • the number of first elements in the second associated picture may be the number of subway stations on the subway navigation route in the second associated picture.
  • the subway stations on the subway navigation route in the first picture shown in FIG. 2 include "Station A”, “Station B” and “Station C", then the number of first elements in the first picture is three.
  • The number of first elements in the first associated picture shown in Figure 3a is 11, and the number of first elements in the second associated picture shown in Figure 3b is 2.
  • generating an associated picture associated with the first picture may further include: when the number of first elements in the first picture is the first number, performing a target operation, and the target operation includes at least one of the following: zooming in on the first interface to generate the first associated picture, and shrinking the first interface to generate the second associated picture; wherein, the number of first elements in the first associated picture is greater than the first number, and the number of first elements in the second associated picture is smaller than the first number.
  • Whether the electronic device performs one or both of the target operations depends on the value of the first number. For example, when the first number is within the first range, both the first associated picture and the second associated picture may be generated. For another example, when the first number is within the second range, only the first associated picture may be generated. For another example, when the first number is within the third range, only the second associated picture may be generated.
  • the above first range, second range and third range do not overlap. Specifically, the minimum value of the second range may be greater than the maximum value of the first range, and the minimum value of the third range may be smaller than the maximum value of the second range.
  • the first range, the second range and the third range may be values set according to actual application scenarios and actual requirements.
  • When the electronic device generates the first picture, it also detects the number of first elements in the first picture, and when that number is within the first range, it enlarges the first interface to generate the first associated picture and shrinks the first interface to generate the second associated picture.
  • When the electronic device generates the first picture shown in FIG. 2, it also detects that the number of first elements in the first picture is 3 (the number 3 is within the first range); it then zooms in on the first interface to generate the first associated picture, and zooms out on the first interface to generate the second associated picture.
  • The first associated picture may be the one shown in Figure 3a. It includes the second area and contains richer road details, namely "Station A", "Station B", "Station C", "Station A1" through "Station A5", and "Station B1" through "Station B3"; that is, the number of first elements in the first associated picture is 11, which is greater than the first number 3.
  • The second associated picture may be the one shown in Figure 3b. It includes the second area and covers a wider range of area, for example including the panorama around "Station C"; the number of first elements in the second associated picture is 2, which is less than the first number 3.
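The range-based target operation described above can be sketched as follows. The concrete range boundaries are illustrative placeholders (the embodiment only requires non-overlapping, application-defined ranges), and the function and operation names are assumptions.

```python
def plan_associated_pictures(first_number,
                             both_range=(3, 5),
                             enlarge_only_range=(6, 10),
                             shrink_only_range=(1, 2)):
    """Decide which target operations to perform from the number of first
    elements (e.g. subway stations) found in the screenshot.

    The range boundaries here are made-up sample values, not values
    disclosed by the embodiment.
    """
    def within(n, rng):
        return rng[0] <= n <= rng[1]

    if within(first_number, both_range):
        # Generate both associated pictures: zoom the first interface in and out.
        return {"enlarge_first_interface", "shrink_first_interface"}
    if within(first_number, enlarge_only_range):
        return {"enlarge_first_interface"}   # only the first associated picture
    if within(first_number, shrink_only_range):
        return {"shrink_first_interface"}    # only the second associated picture
    return set()                             # no associated picture generated
```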
  • the electronic device will also write the coordinate mapping relationship between the first picture and the associated picture in the attribute information of the first picture.
  • the box area formed by the upper left corner A, the upper right corner B, the lower left corner C, and the lower right corner D and the box area formed by the upper left corner A1, the upper right corner B1, the lower left corner C1, and the lower right corner D1 point to the same range in the map, but the scale in the map interface is different, resulting in a certain difference in the area of the box area.
  • For Figure 2 and Figure 3b, the electronic device records the coordinates of the first area of the first picture (point A in the upper left corner, point B in the upper right corner, point C in the lower left corner, and point D in the lower right corner) and the coordinates of the second area of the second associated picture (point A2 in the upper left corner, point B2 in the upper right corner, point C2 in the lower left corner, and point D2 in the lower right corner), and establishes the mapping relationships between point A and point A2, point B and point B2, point C and point C2, and point D and point D2.
  • The frame area formed by A in the upper left corner, B in the upper right corner, C in the lower left corner, and D in the lower right corner points to the same area in the map as the frame area formed by A2 in the upper left corner, B2 in the upper right corner, C2 in the lower left corner, and D2 in the lower right corner, but the scales of the map interfaces are different, resulting in a certain difference in the areas of the two frame regions.
  • When the user initiates a screenshot, the electronic device will not only generate the current screenshot but also enlarge the first interface to generate the first associated picture and shrink the first interface to generate the second associated picture, so that more detailed and richer road details, or a wider map area, can be viewed through the first and second associated pictures.
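The coordinate mapping written into the attribute information of the first picture can be modeled as a simple corner-to-corner table. The dictionary layout, function name, and sample coordinates below are illustrative assumptions; the embodiment does not specify a storage format.

```python
def build_corner_mapping(first_area_corners, second_area_corners):
    """Pair up the four corner points of the first area (A, B, C, D) with
    those of an associated picture's second area (e.g. A1, B1, C1, D1).

    Corner order: upper-left, upper-right, lower-left, lower-right.
    """
    names = ("upper_left", "upper_right", "lower_left", "lower_right")
    return {name: {"first": f, "associated": s}
            for name, f, s in zip(names, first_area_corners, second_area_corners)}

# Hypothetical attribute information of the first picture; the coordinates
# are made-up sample values.
attribute_info = {
    "coordinate_mapping": {
        "first_associated": build_corner_mapping(
            [(10, 10), (90, 10), (10, 60), (90, 60)],   # A, B, C, D
            [(0, 0), (200, 0), (0, 120), (200, 120)]),  # A1, B1, C1, D1
    }
}
```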
  • Step 1200 in response to the first input, if the first picture includes an associated picture, generate a second picture according to the associated picture and the first picture, and display the second picture.
  • generating a second picture and displaying the second picture according to the associated picture and the first picture in step 1200 may further include the following steps 1210a to 1230a:
  • Step 1210a acquire the first area of the first picture and the second area of the associated picture; wherein, the first area corresponds to the second area.
  • Example 1: when the first input is an input to at least partially enlarge the first picture shown in FIG. 2, the electronic device will replace the first area of the first picture shown in FIG. 2 with the second area of the first associated picture shown in FIG. 3a.
  • Example 2: when the first input is an input to at least partially reduce the first picture shown in FIG. 2, the electronic device will replace the first area of the first picture shown in FIG. 2 with the second area of the second associated picture shown in FIG. 3b.
  • In the first picture shown in FIG. 2, the frame area formed by point A in the upper left corner, point B in the upper right corner, point C in the lower left corner, and point D in the lower right corner is the first area, and the frame area formed by point A2 in the upper left corner, point B2 in the upper right corner, point C2 in the lower left corner, and point D2 in the lower right corner in the second associated picture shown in FIG. 3b is the second area.
  • Step 1220a replace the first area of the first picture with the second area, so as to generate the second picture.
  • In step 1220a, after acquiring the first area of the first picture and the second area of the associated picture, the electronic device replaces the first area with the second area to generate the second picture.
  • The scales of the first area and the second area are different, so the areas indicating the same map range differ in size.
  • the size corresponding to the second area may be first enlarged or reduced to a certain extent, so that the adjusted size corresponding to the second area matches the image size corresponding to the first area.
  • the adjusted second area can be completely combined with other areas in the first picture (areas other than the first area) to form a new second picture.
  • After the electronic device obtains the first area of the first picture shown in FIG. 2 and the second area of the associated picture shown in FIG. 3a, it can replace the first area of the first picture shown in FIG. 2 with the second area of the associated picture shown in FIG. 3a to obtain the second picture shown in FIG. 4.
  • After the electronic device obtains the first area of the first picture shown in FIG. 2 and the second area of the associated picture shown in FIG. 3b, it can replace the first area of the first picture shown in FIG. 2 with the second area of the associated picture shown in FIG. 3b to obtain the second picture shown in FIG. 5.
  • Step 1230a display the second picture.
  • the second picture after the second picture is generated, the second picture can be displayed on the display screen of the electronic device.
  • Through steps 1210a to 1230a, the electronic device obtains the first area of the first picture and the second area of the associated picture; since the first area corresponds to the second area, the first area of the first picture can be replaced with the second area to generate the second picture, through which more local map details or the global map route can be viewed, and the operation is more convenient.
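Steps 1210a to 1230a amount to a resize-and-paste operation: the second area is first scaled to the pixel size of the first area and then pasted over it, leaving the rest of the first picture unchanged. The list-of-rows pixel representation and nearest-neighbour scaling below are illustrative simplifications, not the embodiment's actual image pipeline.

```python
def resize_nearest(region, new_w, new_h):
    """Nearest-neighbour resize of a pixel grid stored as a list of rows."""
    old_h, old_w = len(region), len(region[0])
    return [[region[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

def replace_area(first_picture, first_box, second_region):
    """Replace the first area of the first picture with the second area of
    the associated picture, scaling the patch to fit (steps 1210a-1220a).

    first_box is (left, top, right, bottom) in first-picture coordinates.
    """
    left, top, right, bottom = first_box
    w, h = right - left, bottom - top
    patch = resize_nearest(second_region, w, h)
    # Other areas of the first picture are kept unchanged.
    second_picture = [row[:] for row in first_picture]
    for dy in range(h):
        second_picture[top + dy][left:left + w] = patch[dy]
    return second_picture
```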
  • the associated picture includes the above first associated picture and the above second associated picture.
  • the first picture, the first associated picture and the second associated picture all include the first element.
  • the number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is smaller than the number of first elements in the first picture.
  • generating a second picture and displaying the second picture may further include the following steps 1210b to 1220b:
  • Step 1210b if the first input is an input to at least partially enlarge the first picture, generate the second picture according to the first associated picture and the first picture; or, if the first input is an input to at least partially reduce the first picture, generate the second picture according to the second associated picture and the first picture.
  • Since the associated pictures include the first associated picture and the second associated picture, the electronic device will first determine, according to the user's first input, whether to use the first associated picture or the second associated picture, and then generate the second picture from the selected associated picture and the first picture.
  • Example 1: when the user wants to view more subway stations on the subway navigation route through the first picture shown in FIG. 2, an input to at least partially enlarge the first picture is performed, such as a spreading input performed by the user's thumb and forefinger.
  • When the electronic device receives an input from the user to at least partially enlarge the first picture, since the number of first elements in the first associated picture shown in FIG. 3a is greater than the number of first elements in the first picture shown in FIG. 2, the electronic device generates a second picture based on the first picture and the first associated picture. For example, the first area of the first picture shown in FIG. 2 is replaced with the second area of the first associated picture shown in FIG. 3a to obtain the second picture shown in FIG. 4, through which more subway stations can be viewed.
  • Example 2: when the user wants to view a wider map area through the first picture shown in FIG. 2, an input to at least partially reduce the first picture is performed, such as a pinching input performed by the user's thumb and forefinger.
  • When the electronic device receives an input from the user to at least partially reduce the first picture, since the number of first elements in the second associated picture shown in FIG. 3b is smaller than the number of first elements in the first picture shown in FIG. 2, the electronic device generates a second picture based on the first picture and the second associated picture. For example, the first area of the first picture shown in FIG. 2 is replaced with the second area of the second associated picture shown in FIG. 3b to obtain the second picture shown in FIG. 5, through which a wider map area can be viewed.
  • Step 1220b display the second picture.
  • the second picture can be displayed on the display screen of the electronic device.
  • If the first input is an input to at least partially enlarge the first picture, the electronic device will generate a second picture according to the first associated picture and the first picture. Since the first associated picture is generated by zooming in on the first interface, when the user performs a zoom-in operation on the first picture, richer route details can be viewed through the generated second picture. If the first input is an input to at least partially reduce the first picture, the electronic device generates a second picture according to the second associated picture and the first picture; since the second associated picture is generated by zooming out on the first interface, when the user performs a zoom-out operation on the first picture, a wider map area can be viewed through the generated second picture.
  • Step 1300 if the first picture does not include an associated picture, zoom in or out the first picture according to the input parameter of the first input.
  • the first picture can be enlarged or reduced directly according to the input parameters of the first input.
  • the electronic device can acquire the magnification factor of the first input, and enlarge the first picture according to the magnification factor.
  • The electronic device can obtain the reduction factor of the first input, and reduce the first picture according to the reduction factor.
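The fallback of step 1300 amounts to plain scaling by the input parameter. A minimal sketch, assuming the input parameter is a positive scale factor applied to the picture dimensions (the rounding behaviour is an assumption):

```python
def zoom_picture_size(width, height, scale_factor):
    """Plain scaling fallback of step 1300: a factor greater than 1
    enlarges the first picture, a factor below 1 reduces it.

    Rounding to whole pixels and the minimum size of 1 are assumptions
    for illustration.
    """
    if scale_factor <= 0:
        raise ValueError("scale factor must be positive")
    return (max(1, round(width * scale_factor)),
            max(1, round(height * scale_factor)))
```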
  • In summary, when the electronic device receives the user's input of zooming in or out at least part of the first picture, if the first picture includes an associated picture, it generates a second picture according to the associated picture and the first picture and displays it, so that when the user zooms in or out on the first picture, more local map details or the global map route can be viewed through the generated second picture, making the operation more convenient.
  • the image processing method includes the following steps:
  • Step 601 when the first interface is displayed, receiving a user's screen capture input on the navigation interface.
  • Step 602 generating a first picture in response to the user's screen capture input on the navigation interface.
  • Step 603 acquiring the number of first elements in the first picture.
  • Step 604 if the number of the first elements is within the first range, zoom in on the navigation interface to generate a first associated image, and zoom out on the navigation interface to generate a second associated image.
  • Step 605 if the quantity of the first element is within the second range, shrink the navigation interface to generate a second associated picture.
  • Step 606 if the quantity of the first element is within the third range, enlarge the navigation interface to generate the first associated picture.
  • Step 607 write the coordinate mapping relationship between the first area of the first picture and the second area of the associated picture in the attribute information of the first picture.
  • Step 701 receiving a first input from a user on a first picture.
  • Step 702 when the first input is at least partially enlarged input of the first picture, check whether the first picture includes the first associated picture, and if so, perform step 703 , otherwise, perform step 704 .
  • Step 703 if the first picture includes the first associated picture, load the corresponding first associated picture according to the coordinate mapping relationship, replace the first area of the first picture with the second area of the first associated picture, obtain and display the second picture, and the process ends.
  • Step 704 if the first picture does not include the first associated picture, the first picture is enlarged according to the first input magnification factor, and the process ends.
  • Step 705 when the first input is at least partially reduced input of the first picture, check whether the first picture includes the second associated picture, and if so, perform step 706 , otherwise, perform step 707 .
  • Step 706 if the first picture includes the second associated picture, load the corresponding second associated picture according to the coordinate mapping relationship, replace the first area of the first picture with the second area of the second associated picture, obtain the second picture, and display the second picture, and the process ends.
  • Step 707 if the first picture does not include the second associated picture, the first picture is reduced according to the first input reduction factor, and the process ends.
  • The picture processing method provided in the embodiments of this application may be executed by a picture processing apparatus.
  • The picture processing apparatus provided in the embodiments of this application is described below by taking the apparatus executing the picture processing method as an example.
  • the embodiment of the present application further provides a picture processing device 800 , the picture processing device 800 includes a receiving module 810 , a generating module 820 , a display module 830 and a control module 840 .
  • the receiving module 810 is configured to receive a user's first input on the first picture, and the first input is used to zoom in or out at least part of the first picture.
  • the generating module 820 is configured to, in response to the first input, generate a second picture according to the associated picture and the first picture when the first picture includes an associated picture.
  • the display module 830 is configured to display the second picture.
  • the control module 840 is configured to zoom in or out the first picture according to the input parameters of the first input when the first picture does not include an associated picture.
  • The generating module 820 is specifically configured to: acquire the first area of the first picture and the second area of the associated picture, wherein the first area corresponds to the second area; and replace the first area of the first picture with the second area to generate the second picture.
  • The display module is specifically configured to: display the second picture.
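The area substitution performed by the generating module can be sketched in pure Python on pixel grids. Because the two areas indicate the same map extent at different scales, the second area is first resampled (nearest-neighbour here, an assumption) to the first area's size:

```python
def replace_area(first_pixels, second_area_pixels, first_box):
    """Return a copy of the first picture (a list of pixel rows) whose
    rectangle first_box = (left, top, right, bottom) is replaced by
    second_area_pixels, resampled to the rectangle's dimensions."""
    left, top, right, bottom = first_box
    w, h = right - left, bottom - top
    sh, sw = len(second_area_pixels), len(second_area_pixels[0])
    second = [row[:] for row in first_pixels]    # leave the original intact
    for y in range(h):
        for x in range(w):
            # nearest-neighbour sample so the second area exactly fills
            # the first area's footprint in the new second picture
            second[top + y][left + x] = second_area_pixels[y * sh // h][x * sw // w]
    return second
```

In a real implementation an image library would do the resampling; the point here is only that the regions outside the first area are kept untouched while the first area is overwritten.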
  • the associated picture includes a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture all include the first element.
  • The generating module 820 is specifically configured to: generate the second picture according to the first associated picture and the first picture when the first input is an input to enlarge at least part of the first picture, or generate the second picture according to the second associated picture and the first picture when the first input is an input to reduce at least part of the first picture.
  • The number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is less than the number of first elements in the first picture.
  • the display module 830 is specifically configured to: display the second picture.
  • the receiving module 810 is further configured to receive a second input from the user when the first interface is displayed, and the second input is used to take a screenshot of the first interface.
  • the generation module 820 is further configured to generate the first picture in response to the second input; and, if the first picture satisfies a preset condition, generate an associated picture associated with the first picture.
  • the first area in the first picture corresponds to the second area in the associated picture.
  • the associated pictures include a first associated picture and a second associated picture.
  • The generating module 820 is specifically configured to: perform a target operation when the number of first elements in the first picture is a first number, the target operation including at least one of the following: enlarging the first interface to generate a first associated picture, and reducing the first interface to generate a second associated picture.
  • the number of the first elements in the first associated picture is greater than the first number, and the number of the first elements in the second associated picture is smaller than the first number.
  • In this way, when the electronic device receives a user input to enlarge or reduce at least part of the first picture, and the first picture has an associated picture, it generates a second picture based on the associated picture and the first picture and displays it. Thus, when the user enlarges or reduces the first picture, more local map detail or the global map route can be viewed through the generated second picture, making the operation more convenient.
  • the image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a terminal, or other devices other than the terminal.
  • The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), or may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of this application.
  • the picture processing device in the embodiment of the present application may be a device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
  • the image processing apparatus provided in the embodiment of the present application can realize various processes realized by the method embodiment in FIG. 1 , and details are not repeated here to avoid repetition.
  • An embodiment of this application further provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions runnable on the processor 901.
  • When the program or instructions are executed by the processor 901, the steps of the above picture processing method embodiments are implemented with the same technical effects; details are not repeated here to avoid repetition.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
  • The electronic device 1000 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, so as to implement charging management, discharging management, power-consumption management, and other functions through the power management system.
  • the structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, or combine some components, or arrange different components, which will not be repeated here.
  • the processor 1010 is configured to receive a first input from the user on the first picture, and the first input is used to enlarge or reduce at least part of the first picture; in response to the first input, if the first picture includes an associated picture, generate a second picture according to the associated picture and the first picture, and display the second picture; or, if the first picture does not include an associated picture, zoom in or out the first picture according to the input parameters of the first input.
  • In this way, when the electronic device receives a user input to enlarge or reduce at least part of the first picture, and the first picture has an associated picture, it generates a second picture based on the associated picture and the first picture and displays it. Thus, when the user enlarges or reduces the first picture, more local map detail or the global map route can be viewed through the generated second picture, making the operation more convenient.
  • the processor 1010 is further configured to acquire the first area of the first picture and the second area of the associated picture; wherein, the first area corresponds to the second area; replace the first area of the first picture with the second area to generate the second picture; and display the second picture.
  • the related picture includes a first related picture and a second related picture
  • the first picture, the first related picture and the second related picture all include the first element.
  • The processor 1010 is further configured to: generate the second picture according to the first associated picture and the first picture when the first input is an input to enlarge at least part of the first picture, or generate the second picture according to the second associated picture and the first picture when the first input is an input to reduce at least part of the first picture, where the number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is less than the number of first elements in the first picture; and display the second picture.
  • the processor 1010 is further configured to receive a second input from the user when the first interface is displayed, and the second input is used to take a screenshot of the first interface; in response to the second input, generate the first picture; if the first picture satisfies a preset condition, generate an associated picture associated with the first picture; wherein, the first area in the first picture corresponds to the second area in the associated picture.
  • the associated pictures include a first associated picture and a second associated picture.
  • the processor 1010 is further configured to perform a target operation when the number of first elements in the first picture is a first number, and the target operation includes at least one of the following: zooming in on the first interface to generate a first associated picture, and shrinking the first interface to generate a second associated picture; wherein, the number of the first elements in the first associated picture is greater than the first number, and the number of the first elements in the second associated picture is smaller than the first number.
  • It should be understood that, in the embodiments of this application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 1009 can be used to store software programs as well as various data.
  • the memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playback function, an image playback function, etc.) and the like.
  • The memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
  • The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
  • the embodiment of the present application also provides a readable storage medium, the readable storage medium stores a program or an instruction, and when the program or instruction is executed by a processor, each process of the above-mentioned image processing method embodiment can be achieved, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk, and the like.
  • the embodiment of the present application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a program or an instruction to implement each process of the above image processing method embodiment, and can achieve the same technical effect. To avoid repetition, details are not repeated here.
  • It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • An embodiment of the present application provides a computer program product, the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the various processes in the above image processing method embodiment, and can achieve the same technical effect. To avoid repetition, details are not repeated here.
  • From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product.
  • The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of this application.


Abstract

Disclosed are a picture processing method and apparatus. The method includes: receiving a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture (1100); in response to the first input, when the first picture has an associated picture, generating a second picture according to the associated picture and the first picture, and displaying the second picture (1200); or, when the first picture has no associated picture, enlarging or reducing the first picture according to input parameters of the first input (1300).

Description

Picture processing method and apparatus
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202210058534.4, filed on January 18, 2022 and entitled "Picture processing method and apparatus", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application belongs to the technical field of image processing, and specifically relates to a picture processing method and apparatus.
BACKGROUND
When traveling, users often use a map application for navigation. For example, a user may first look up a route in the map application and save a screenshot of it; this way, the overall route can be viewed en route without opening the map application.
In the related art, when a user views a screenshot, zooming is limited to scaling the original picture itself: the user can neither see finer, richer road details nor browse a wider map area. To do either, the user has to open the map application, initiate navigation again, and then view more road details there, which is a cumbersome operation.
SUMMARY
An objective of the embodiments of this application is to provide a picture processing method and apparatus that can solve the problem that zooming a screenshot cannot reveal finer content or a wider area.
According to a first aspect, an embodiment of this application provides a picture processing method, including:
receiving a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture;
in response to the first input, when the first picture has an associated picture, generating a second picture according to the associated picture and the first picture, and displaying the second picture;
or, when the first picture has no associated picture, enlarging or reducing the first picture according to input parameters of the first input.
According to a second aspect, an embodiment of this application provides a picture processing apparatus, including:
a receiving module, configured to receive a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture;
a generating module, configured to, in response to the first input, generate a second picture according to the associated picture and the first picture when the first picture has an associated picture;
a display module, configured to display the second picture;
a control module, configured to enlarge or reduce the first picture according to input parameters of the first input when the first picture has no associated picture.
According to a third aspect, an embodiment of this application provides an electronic device, including a processor and a memory, the memory storing a program or instructions runnable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
According to a fourth aspect, an embodiment of this application provides a readable storage medium storing a program or instructions, where the program or instructions, when executed by a processor, implement the steps of the method according to the first aspect.
According to a fifth aspect, an embodiment of this application provides a chip, including a processor and a communication interface coupled to the processor, the processor being configured to run a program or instructions to implement the method according to the first aspect.
According to a sixth aspect, an embodiment of this application provides a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the method according to the first aspect.
In the embodiments of this application, when the electronic device receives a user input to enlarge or reduce at least part of a first picture, and the first picture has an associated picture, it generates a second picture according to the associated picture and the first picture and displays the second picture. Thus, when the user enlarges or reduces the first picture, more local map detail or the global map route can be viewed through the generated second picture, making the operation more convenient.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a picture processing method provided by an embodiment of this application;
FIG. 2 is a first schematic diagram of a display interface of an electronic device provided by an embodiment of this application;
FIG. 3a is a second schematic diagram of a display interface of an electronic device provided by an embodiment of this application;
FIG. 3b is a third schematic diagram of a display interface of an electronic device provided by an embodiment of this application;
FIG. 4 is a fourth schematic diagram of a display interface of an electronic device provided by an embodiment of this application;
FIG. 5 is a fifth schematic diagram of a display interface of an electronic device provided by an embodiment of this application;
FIG. 6 is a flowchart of a picture processing method of one example provided by this application;
FIG. 7 is a flowchart of a picture processing method of another example provided by this application;
FIG. 8 is a schematic structural diagram of a picture processing apparatus provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of an electronic device provided by another embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one class, and the number of such objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The picture processing method provided by the embodiments of this application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Refer to FIG. 1, which is a flowchart of a picture processing method provided by an embodiment of this application. The method may be applied to an electronic device, which may be a mobile phone, a tablet computer, a notebook computer, or the like. As shown in FIG. 1, the method may include steps 1100 to 1300, described in detail below.
Step 1100: receive a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture.
In an optional embodiment, the first picture is a screenshot generated by capturing the first interface displayed on the screen of the electronic device, and the first picture is stored in the photo album. For example, the display screen of the electronic device shows a first interface in which the user navigates via a map application, the first interface including the navigation route; the first picture is then a screenshot containing the navigation route. When network conditions are poor while the user is traveling, the user can enlarge or reduce this first picture to view it.
Understandably, the first input may be an input to enlarge at least part of the first picture, for example a spread gesture performed with the thumb and index finger. The first input may also be an input to reduce at least part of the first picture, for example a pinch gesture performed with the thumb and index finger.
In this embodiment, the picture processing method of the embodiments of the present disclosure further includes the following steps 2100 to 2300:
Step 2100: when the first interface is displayed, receive a user's second input, the second input being used to take a screenshot of the first interface.
The second input is a screenshot input for the first interface. In this embodiment, the user may trigger the second input through a physical key combination, a status-bar shortcut, a quick-function button on the first interface, or the like.
Step 2200: in response to the second input, generate the first picture.
In step 2200, upon receiving the user's screenshot input on the first interface, the electronic device generates the first picture in response to that input. For example, the user may trigger the screenshot input through the above physical key combination, status-bar shortcut, or quick-function button on the display interface, to control the electronic device to take a screenshot and generate the first picture.
For example, after the user enters the start position "Station A" and the end position "Station C" through the search control of the map application and taps search, the map application provides a navigation route from "Station A" to "Station C". Here, the user may input a control instruction through the physical key combination of the electronic device to make it take a screenshot, generating the first picture containing the navigation route; the first picture may be as shown in FIG. 2.
Step 2300: when the first picture satisfies a preset condition, generate an associated picture associated with the first picture.
In this embodiment, after generating the first picture, the electronic device also detects whether the first picture satisfies a preset condition and, if so, generates an associated picture associated with the first picture. The associated picture may be the pictures shown in FIG. 3a and FIG. 3b, and it is stored in the electronic device as a hidden file; that is, the associated picture is not displayed on the electronic device, to prevent the user from browsing it directly.
Understandably, the first area of the first picture corresponds to the second area of the associated picture. The first area of the first picture is the area where the navigation route is located, and the second area of the associated picture is likewise the area where the navigation route is located; in other words, the first area of the first picture and the second area of the associated picture indicate the same map extent.
For example, in the first picture shown in FIG. 2, the box formed by upper-left point A, upper-right point B, lower-left point C, and lower-right point D, i.e., the navigation route area from "Station A" to "Station C", is the first area. In the associated picture shown in FIG. 3a, the box formed by upper-left point A1, upper-right point B1, lower-left point C1, and lower-right point D1, i.e., the navigation route area from "Station A" to "Station C", is the second area. In the associated picture shown in FIG. 3b, the box formed by upper-left point A2, upper-right point B2, lower-left point C2, and lower-right point D2 is likewise the second area. As can be seen from FIG. 2 and FIG. 3a, the first area of the first picture and the second area of the associated picture indicate the same map extent; for example, if the left edge of the first area is 200 m beyond Station C and its right edge 500 m beyond Station A, then the left edge of the second area is likewise 200 m beyond Station C and its right edge 500 m beyond Station A. Similarly, FIG. 2 and FIG. 3b show that the first area of the first picture and the second area of the associated picture indicate the same map extent.
In an optional embodiment, the associated picture may include a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture all contain a first element. The number of first elements in the first associated picture is greater than that in the first picture, and the number of first elements in the second associated picture is less than that in the first picture. As in the above example, the first associated picture may be the picture shown in FIG. 3a, and the second associated picture may be the picture shown in FIG. 3b.
The first element may be set according to the actual application scenario and needs. For example, for a first picture containing a subway navigation route, the first element may be a subway station on that route, and the number of first elements in the first picture may be the number of subway stations on the route in the first picture; likewise for the first associated picture and the second associated picture.
For example, the subway stations on the route in the first picture shown in FIG. 2 are "Station A", "Station B", and "Station C", so the number of first elements in the first picture is 3. Similarly, the number of first elements in the first associated picture shown in FIG. 3a is 11, and the number of first elements in the second associated picture shown in FIG. 3b is 2.
In this embodiment, generating the associated picture in step 2300 when the first picture satisfies the preset condition may further include: when the number of first elements in the first picture is a first number, performing a target operation, the target operation including at least one of the following: enlarging the first interface to generate the first associated picture, and reducing the first interface to generate the second associated picture; where the number of first elements in the first associated picture is greater than the first number, and the number of first elements in the second associated picture is less than the first number.
In an optional embodiment, whether the electronic device performs one or both of the target operations depends on the first number. For example, when the first number is within a first range, both the first associated picture and the second associated picture may be generated. As another example, when the first number is within a second range, only the first associated picture may be generated. As yet another example, when the first number is within a third range, only the second associated picture may be generated. The first range, the second range, and the third range do not overlap; specifically, the minimum of the second range may be greater than the maximum of the first range, and the minimum of the third range may be greater than the maximum of the second range. The three ranges may be values set according to the actual application scenario and needs.
Understandably, when the first picture contains very few first elements, only the first associated picture may be generated; when it contains very many, only the second associated picture may be generated; and when the number of first elements is moderate, both the first and the second associated picture may be generated.
Specifically, after generating the first picture, the electronic device also detects the number of first elements in it, and when that number is within the first range, enlarges the first interface to generate the first associated picture and reduces the first interface to generate the second associated picture.
For example, after generating the first picture shown in FIG. 2, the electronic device detects that the number of first elements in it is 3 (which lies within the first range), so it enlarges the first interface to generate the first associated picture and reduces the first interface to generate the second associated picture. The first associated picture may be as shown in FIG. 3a: it includes the second area and richer road details, namely "Station A", "Station B", "Station C", "Station A1" through "Station A5", and "Station B1" through "Station B3", so the number of first elements in it is 11, greater than the first number 3. The second associated picture may be as shown in FIG. 3b: it includes the second area and covers a wider area, for example the full view of "Station C", and the number of first elements in it is 2, less than the first number 3.
It can be understood that, after generating the first picture and the associated picture, the electronic device also writes, into the attribute information of the first picture, the coordinate mapping relationship between the first picture and the associated picture.
For example, as shown in FIG. 2 and FIG. 3a, the coordinates of the first area of the first picture (upper-left point A, upper-right point B, lower-left point C, lower-right point D) and the coordinates of the second area of the first associated picture (upper-left point A1, upper-right point B1, lower-left point C1, lower-right point D1) are recorded, and mappings are established between A and A1, B and B1, C and C1, and D and D1. Understandably, the box formed by A, B, C, D and the box formed by A1, B1, C1, D1 point to the same extent on the map; only the map scale differs, which is why the areas of the two boxes differ somewhat.
Similarly, as shown in FIG. 2 and FIG. 3b, the coordinates of the first area of the first picture (points A, B, C, D) and of the second area of the second associated picture (points A2, B2, C2, D2) are recorded, and mappings are established between A and A2, B and B2, C and C2, and D and D2. The two boxes likewise point to the same extent on the map, differing only in map scale and hence somewhat in area.
According to steps 2100 to 2300 above, when the user takes a screenshot, the electronic device not only generates the current screenshot but also enlarges the first interface to generate a first associated picture and reduces the first interface to generate a second associated picture, through which finer, richer road details can be viewed or a wider map area browsed.
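The recorded corner mapping (A↔A1, B↔B1, …) pins the same map extent in both pictures, so any point in the first area can be carried into the corresponding second area by linear interpolation between the matching corners. A small sketch under the assumption that both areas are axis-aligned boxes given as (left, top, right, bottom):

```python
def map_point(point, first_area, second_area):
    """Carry a point inside the first picture's first area to the
    corresponding position in the associated picture's second area."""
    x, y = point
    fl, ft, fr, fb = first_area
    sl, st, sr, sb = second_area
    # relative position inside the first area ...
    u = (x - fl) / (fr - fl)
    v = (y - ft) / (fb - ft)
    # ... equals the relative position inside the second area, because
    # both boxes cover the same map extent at different scales
    return (sl + u * (sr - sl), st + v * (sb - st))
```

Storing only the four corner pairs in the attribute information is enough to recover this mapping at zoom time.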
After the user's first input on the first picture is received, the method further includes:
Step 1200: in response to the first input, when the first picture has an associated picture, generate a second picture according to the associated picture and the first picture, and display the second picture.
In an optional embodiment, generating and displaying the second picture in step 1200 may further include the following steps 1210a to 1230a:
Step 1210a: acquire the first area of the first picture and the second area of the associated picture, where the first area corresponds to the second area.
Example 1: when the first input is an input to enlarge at least part of the first picture shown in FIG. 2, the electronic device replaces the first area of that picture with the second area of the first associated picture shown in FIG. 3a. The box formed by points A, B, C, D in FIG. 2 is the first area, and the box formed by points A1, B1, C1, D1 in FIG. 3a is the second area; here the electronic device acquires both areas.
Example 2: when the first input is an input to reduce at least part of the first picture shown in FIG. 2, the electronic device replaces the first area of that picture with the second area of the second associated picture shown in FIG. 3b. The box formed by points A, B, C, D in FIG. 2 is the first area, and the box formed by points A2, B2, C2, D2 in FIG. 3b is the second area; here the electronic device acquires both areas.
Step 1220a: replace the first area of the first picture with the second area to generate the second picture.
In step 1220a, after acquiring the first area of the first picture and the second area of the associated picture, the electronic device replaces the first area with the second area to generate the second picture.
It can be understood from the above analysis that, because the map scales differ, the first area and the second area indicating the same map extent differ somewhat in size. Therefore, when replacing the first area with the second area, the second area may first be enlarged or reduced to some degree so that its adjusted size matches the picture size of the first area. In this way, the adjusted second area can be combined completely with the other areas of the first picture (the areas other than the first area) to form the new second picture.
Continuing Example 1, after acquiring the first area of the picture in FIG. 2 and the second area of the associated picture in FIG. 3a, the electronic device replaces the former with the latter to obtain the second picture shown in FIG. 4, which contains more road details.
Continuing Example 2, after acquiring the first area of the picture in FIG. 2 and the second area of the associated picture in FIG. 3b, the electronic device replaces the former with the latter to obtain the second picture shown in FIG. 5, which covers a wider map area.
Step 1230a: display the second picture.
In this embodiment, after the second picture is generated, it can be displayed on the display screen of the electronic device.
According to steps 1210a to 1230a above, the first area of the first picture and the second area of the associated picture are acquired; since the two areas correspond, the first area of the first picture can be replaced with the second area to generate the second picture, through which more local map detail or the global map route can then be viewed, making the operation more convenient.
In an optional embodiment, the associated picture includes the above first associated picture and second associated picture; as described above, the first picture, the first associated picture, and the second associated picture all contain the first element, the number of first elements in the first associated picture being greater than that in the first picture and the number in the second associated picture being less than that in the first picture. Generating and displaying the second picture in step 1200 may further include the following steps 1210b to 1220b:
Step 1210b: when the first input is an input to enlarge at least part of the first picture, generate the second picture according to the first associated picture and the first picture; or, when the first input is an input to reduce at least part of the first picture, generate the second picture according to the second associated picture and the first picture.
In step 1210b, since the associated picture includes a first and a second associated picture, the electronic device first determines from the user's first input whether the second picture should be generated with the first associated picture or with the second associated picture, and then generates it accordingly.
Example 1: when the user wants to view more subway stations on the navigation route through the first picture shown in FIG. 2, the user performs an input to enlarge at least part of the first picture, such as a spread gesture with the thumb and index finger. Upon receiving this input, and because the number of first elements in the first associated picture shown in FIG. 3a is greater than that in the first picture, the electronic device generates the second picture from the first picture and the first associated picture, for example by replacing the first area of the first picture with the second area of the first associated picture to obtain the second picture shown in FIG. 4, through which more subway stations can be viewed.
Example 2: when the user wants to view a wider map area through the first picture shown in FIG. 2, the user performs an input to reduce at least part of the first picture, such as a pinch gesture with the thumb and index finger. Upon receiving this input, and because the number of first elements in the second associated picture shown in FIG. 3b is less than that in the first picture, the electronic device generates the second picture from the first picture and the second associated picture, for example by replacing the first area of the first picture with the second area of the second associated picture to obtain the second picture shown in FIG. 5, through which a wider map area can be viewed.
Step 1220b: display the second picture.
In step 1220b, after the second picture is generated, it can be displayed on the display screen of the electronic device.
According to steps 1210b to 1220b above, when the first input is an input to enlarge at least part of the first picture, the electronic device generates the second picture from the first associated picture and the first picture; since the first associated picture is generated by enlarging the first interface, the user can view richer route details through the generated second picture when enlarging the first picture. When the first input is an input to reduce at least part of the first picture, the electronic device generates the second picture from the second associated picture and the first picture; since the second associated picture is generated by reducing the first interface, the user can view a wider map area through the generated second picture when reducing the first picture.
Step 1300: when the first picture has no associated picture, enlarge or reduce the first picture according to input parameters of the first input.
In this embodiment, when the first picture has no associated picture, the first picture can be directly enlarged or reduced according to the input parameters of the first input.
For example, when the first input is an input to enlarge at least part of the first picture and the first picture has no associated picture, the electronic device obtains the magnification factor of the first input and enlarges the first picture accordingly.
As another example, when the first input is an input to reduce at least part of the first picture and the first picture has no associated picture, the electronic device obtains the reduction factor of the first input and reduces the first picture accordingly.
According to this embodiment of this application, when the electronic device receives a user input to enlarge or reduce at least part of the first picture, and the first picture has an associated picture, it generates a second picture according to the associated picture and the first picture and displays it; thus, when the user enlarges or reduces the first picture, more local map detail or the global map route can be viewed through the generated second picture, making the operation more convenient.
Next, taking a map application as an example, an example picture processing method is shown. With reference to FIG. 6 and FIG. 7, the picture processing method includes the following steps:
Step 601: when the first interface is displayed, receive a user's screenshot input on the navigation interface.
Step 602: in response to the user's screenshot input on the navigation interface, generate a first picture.
Step 603: obtain the number of first elements in the first picture.
Step 604: if the number of first elements is within the first range, enlarge the navigation interface to generate a first associated picture, and reduce the navigation interface to generate a second associated picture.
Step 605: if the number of first elements is within the second range, reduce the navigation interface to generate a second associated picture.
Step 606: if the number of first elements is within the third range, enlarge the navigation interface to generate a first associated picture.
Step 607: write, into the attribute information of the first picture, the coordinate mapping relationship between the first area of the first picture and the second area of the associated picture.
Step 701: receive a user's first input on the first picture.
Step 702: if the first input is an input to enlarge at least part of the first picture, check whether the first picture has a first associated picture; if so, perform step 703, otherwise perform step 704.
Step 703: if the first picture has a first associated picture, load the corresponding first associated picture according to the coordinate mapping relationship, replace the first area of the first picture with the second area of the first associated picture, obtain and display the second picture, and end the process.
Step 704: if the first picture has no first associated picture, enlarge the first picture according to the magnification factor of the first input, and end the process.
Step 705: if the first input is an input to reduce at least part of the first picture, check whether the first picture has a second associated picture; if so, perform step 706, otherwise perform step 707.
Step 706: if the first picture has a second associated picture, load the corresponding second associated picture according to the coordinate mapping relationship, replace the first area of the first picture with the second area of the second associated picture, obtain and display the second picture, and end the process.
Step 707: if the first picture has no second associated picture, reduce the first picture according to the reduction factor of the first input, and end the process.
The picture processing method provided in the embodiments of this application may be executed by a picture processing apparatus. The picture processing apparatus provided in the embodiments of this application is described by taking the apparatus executing the picture processing method as an example.
Corresponding to the above embodiments, and referring to FIG. 8, an embodiment of this application further provides a picture processing apparatus 800, which includes a receiving module 810, a generating module 820, a display module 830, and a control module 840.
The receiving module 810 is configured to receive a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture.
The generating module 820 is configured to, in response to the first input, generate a second picture according to the associated picture and the first picture when the first picture has an associated picture.
The display module 830 is configured to display the second picture.
The control module 840 is configured to enlarge or reduce the first picture according to input parameters of the first input when the first picture has no associated picture.
In one embodiment, the generating module 820 is specifically configured to: acquire the first area of the first picture and the second area of the associated picture, where the first area corresponds to the second area; and replace the first area of the first picture with the second area to generate the second picture.
The display module is specifically configured to:
display the second picture.
In one embodiment, the associated picture includes a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture all contain a first element.
The generating module 820 is specifically configured to: generate the second picture according to the first associated picture and the first picture when the first input is an input to enlarge at least part of the first picture, or generate the second picture according to the second associated picture and the first picture when the first input is an input to reduce at least part of the first picture, where the number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is less than the number of first elements in the first picture.
The display module 830 is specifically configured to: display the second picture.
In one embodiment, the receiving module 810 is further configured to receive a user's second input when the first interface is displayed, the second input being used to take a screenshot of the first interface.
The generating module 820 is further configured to generate the first picture in response to the second input, and, when the first picture satisfies a preset condition, generate an associated picture associated with the first picture.
The first area of the first picture corresponds to the second area of the associated picture.
In one embodiment, the associated picture includes a first associated picture and a second associated picture.
The generating module 820 is specifically configured to: perform a target operation when the number of first elements in the first picture is a first number, the target operation including at least one of the following: enlarging the first interface to generate the first associated picture, and reducing the first interface to generate the second associated picture.
The number of first elements in the first associated picture is greater than the first number, and the number of first elements in the second associated picture is less than the first number.
According to the embodiments of the present disclosure, when the electronic device receives a user input to enlarge or reduce at least part of the first picture, and the first picture has an associated picture, it generates a second picture according to the associated picture and the first picture and displays it; thus, when the user enlarges or reduces the first picture, more local map detail or the global map route can be viewed through the generated second picture, making the operation more convenient.
The picture processing apparatus in the embodiments of this application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), or may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of this application.
The picture processing apparatus in the embodiments of this application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The picture processing apparatus provided in the embodiments of this application can implement the processes implemented by the method embodiment of FIG. 1; details are not repeated here to avoid repetition.
Optionally, as shown in FIG. 9, an embodiment of this application further provides an electronic device 900, including a processor 901 and a memory 902, the memory 902 storing a program or instructions runnable on the processor 901; when executed by the processor 901, the program or instructions implement the steps of the above picture processing method embodiments with the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of this application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
Those skilled in the art will understand that the electronic device 1000 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, thereby implementing charging management, discharging management, power-consumption management, and other functions through the power management system. The structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not repeated here.
The processor 1010 is configured to receive a user's first input on a first picture, the first input being used to enlarge or reduce at least part of the first picture; in response to the first input, when the first picture has an associated picture, generate a second picture according to the associated picture and the first picture, and display the second picture; or, when the first picture has no associated picture, enlarge or reduce the first picture according to input parameters of the first input.
According to the embodiments of the present disclosure, when the electronic device receives a user input to enlarge or reduce at least part of the first picture, and the first picture has an associated picture, it generates a second picture according to the associated picture and the first picture and displays it; thus, when enlarging or reducing the first picture, the user can view more local map detail or the global map route through the generated second picture, making the operation more convenient.
Optionally, the processor 1010 is further configured to acquire the first area of the first picture and the second area of the associated picture, where the first area corresponds to the second area; replace the first area of the first picture with the second area to generate the second picture; and display the second picture.
Optionally, the associated picture includes a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture all contain a first element. The processor 1010 is further configured to generate the second picture according to the first associated picture and the first picture when the first input is an input to enlarge at least part of the first picture, or generate the second picture according to the second associated picture and the first picture when the first input is an input to reduce at least part of the first picture, where the number of first elements in the first associated picture is greater than that in the first picture and the number of first elements in the second associated picture is less than that in the first picture; and display the second picture.
Optionally, the processor 1010 is further configured to receive a user's second input when the first interface is displayed, the second input being used to take a screenshot of the first interface; generate the first picture in response to the second input; and, when the first picture satisfies a preset condition, generate an associated picture associated with the first picture, where the first area of the first picture corresponds to the second area of the associated picture.
Optionally, the associated picture includes a first associated picture and a second associated picture. The processor 1010 is further configured to perform a target operation when the number of first elements in the first picture is a first number, the target operation including at least one of the following: enlarging the first interface to generate the first associated picture, and reducing the first interface to generate the second associated picture; where the number of first elements in the first associated picture is greater than the first number and the number of first elements in the second associated picture is less than the first number.
It should be understood that, in the embodiments of this application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also called a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
The memory 1009 may be used to store software programs and various data. It may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and application programs or instructions required by at least one function (such as a sound playback function and an image playback function). In addition, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
An embodiment of this application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the foregoing picture processing method embodiments, with the same technical effects. To avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of this application provides a computer program product. The program product is stored in a storage medium, and the program product is executed by at least one processor to implement each process of the foregoing picture processing method embodiments, with the same technical effects. To avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "include", "comprise", or any of their other variants are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements, but also includes other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "includes a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in a reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of this application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the foregoing specific implementations. The foregoing specific implementations are merely illustrative rather than restrictive. Inspired by this application, a person of ordinary skill in the art may devise many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (15)

  1. A picture processing method, the method comprising:
    receiving a first input from a user on a first picture, the first input being used to enlarge or shrink at least part of the first picture; and
    in response to the first input, in a case that the first picture has an associated picture, generating a second picture according to the associated picture and the first picture, and displaying the second picture;
    or, in a case that the first picture has no associated picture, enlarging or shrinking the first picture according to an input parameter of the first input.
  2. The method according to claim 1, wherein the generating a second picture according to the associated picture and the first picture and displaying the second picture comprises:
    obtaining a first area of the first picture and a second area of the associated picture, wherein the first area corresponds to the second area;
    replacing the first area of the first picture with the second area to generate the second picture; and
    displaying the second picture.
  3. The method according to claim 1, wherein the associated picture comprises a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture each contain a first element,
    and the generating a second picture according to the associated picture and the first picture and displaying the second picture comprises:
    in a case that the first input is an input for enlarging at least part of the first picture, generating the second picture according to the first associated picture and the first picture, or, in a case that the first input is an input for shrinking at least part of the first picture, generating the second picture according to the second associated picture and the first picture, wherein the number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is less than the number of first elements in the first picture; and
    displaying the second picture.
  4. The method according to claim 1, wherein before the receiving a first input from a user on a first picture, the method comprises:
    in a case that a first interface is displayed, receiving a second input from the user, the second input being used to take a screenshot of the first interface;
    generating the first picture in response to the second input; and
    in a case that the first picture meets a preset condition, generating an associated picture associated with the first picture;
    wherein a first area in the first picture corresponds to a second area in the associated picture.
  5. The method according to claim 4, wherein the associated picture comprises a first associated picture and a second associated picture,
    and the generating, in a case that the first picture meets a preset condition, an associated picture associated with the first picture comprises:
    in a case that the number of first elements in the first picture is a first number, performing a target operation, the target operation comprising at least one of the following: enlarging the first interface to generate the first associated picture, and shrinking the first interface to generate the second associated picture;
    wherein the number of first elements in the first associated picture is greater than the first number, and the number of first elements in the second associated picture is less than the first number.
  6. A picture processing apparatus, the apparatus comprising:
    a receiving module, configured to receive a first input from a user on a first picture, the first input being used to enlarge or shrink at least part of the first picture;
    a generating module, configured to, in response to the first input, in a case that the first picture has an associated picture, generate a second picture according to the associated picture and the first picture;
    a display module, configured to display the second picture; and
    a control module, configured to, in a case that the first picture has no associated picture, enlarge or shrink the first picture according to an input parameter of the first input.
  7. The apparatus according to claim 6, wherein the generating module is specifically configured to:
    obtain a first area of the first picture and a second area of the associated picture, wherein the first area corresponds to the second area; and
    replace the first area of the first picture with the second area to generate the second picture;
    and the display module is specifically configured to:
    display the second picture.
  8. The apparatus according to claim 6, wherein the associated picture comprises a first associated picture and a second associated picture, and the first picture, the first associated picture, and the second associated picture each contain a first element,
    and the generating module is specifically configured to:
    in a case that the first input is an input for enlarging at least part of the first picture, generate the second picture according to the first associated picture and the first picture, or, in a case that the first input is an input for shrinking at least part of the first picture, generate the second picture according to the second associated picture and the first picture, wherein the number of first elements in the first associated picture is greater than the number of first elements in the first picture, and the number of first elements in the second associated picture is less than the number of first elements in the first picture;
    and the display module is specifically configured to:
    display the second picture.
  9. The apparatus according to claim 6, wherein
    the receiving module is further configured to, in a case that a first interface is displayed, receive a second input from the user, the second input being used to take a screenshot of the first interface;
    the generating module is further configured to generate the first picture in response to the second input; and,
    in a case that the first picture meets a preset condition, generate an associated picture associated with the first picture;
    wherein a first area in the first picture corresponds to a second area in the associated picture.
  10. The apparatus according to claim 9, wherein the associated picture comprises a first associated picture and a second associated picture,
    and the generating module is specifically configured to:
    in a case that the number of first elements in the first picture is a first number, perform a target operation, the target operation comprising at least one of the following:
    enlarging the first interface to generate the first associated picture, and shrinking the first interface to generate the second associated picture;
    wherein the number of first elements in the first associated picture is greater than the first number, and the number of first elements in the second associated picture is less than the first number.
  11. An electronic device, comprising a processor and a memory, the memory storing a program or instruction runnable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the picture processing method according to any one of claims 1 to 5.
  12. A readable storage medium, on which a program or instruction is stored, wherein the program or instruction, when executed by a processor, implements the steps of the picture processing method according to any one of claims 1 to 5.
  13. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the steps of the picture processing method according to any one of claims 1 to 5.
  14. A computer program product, the program product being stored in a storage medium, wherein the program product is executed by at least one processor to implement the steps of the picture processing method according to any one of claims 1 to 5.
  15. A picture processing apparatus, wherein the apparatus is configured to perform the picture processing method according to any one of claims 1 to 5.
PCT/CN2023/072145 2022-01-18 2023-01-13 Picture processing method and apparatus WO2023138509A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210058534.4A CN114491309A (zh) 2022-01-18 2022-01-18 Picture processing method and apparatus
CN202210058534.4 2022-01-18

Publications (1)

Publication Number Publication Date
WO2023138509A1 true WO2023138509A1 (zh) 2023-07-27

Family

ID=81472922

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072145 WO2023138509A1 (zh) 2022-01-18 2023-01-13 图片处理方法和装置

Country Status (2)

Country Link
CN (1) CN114491309A (zh)
WO (1) WO2023138509A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114491309A (zh) * 2022-01-18 2022-05-13 维沃移动通信有限公司 Picture processing method and apparatus

Citations (6)

Publication number Priority date Publication date Assignee Title
US20100290677A1 (en) * 2009-05-13 2010-11-18 John Kwan Facial and/or Body Recognition with Improved Accuracy
US9471834B1 (en) * 2013-11-22 2016-10-18 Google Inc. System and method for updating map views
CN113325990A (zh) * 2020-02-28 2021-08-31 李庆成 Method for processing pictures on an intelligent terminal
CN113808181A (zh) * 2020-10-30 2021-12-17 上海联影智能医疗科技有限公司 Medical image processing method, electronic device, and storage medium
CN113900606A (zh) * 2021-08-26 2022-01-07 北京城市网邻信息技术有限公司 Information display method, device, and storage medium
CN114491309A (zh) * 2022-01-18 2022-05-13 维沃移动通信有限公司 Picture processing method and apparatus

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20080023344A1 (en) * 2006-07-26 2008-01-31 Macor James J Collectable display panel and data storage device
WO2015006912A1 (en) * 2013-07-16 2015-01-22 Nokia Corporation Methods, apparatuses, and computer program products for hiding access to information in an image
CN106791400B (zh) * 2016-12-23 2019-08-20 维沃移动通信有限公司 Image display method and mobile terminal
CN107507159A (zh) * 2017-08-10 2017-12-22 珠海市魅族科技有限公司 Picture processing method and apparatus, computer apparatus, and readable storage medium
CN112585939B (zh) * 2019-12-31 2023-11-17 深圳市大疆创新科技有限公司 Image processing method, control method, device, and storage medium


Also Published As

Publication number Publication date
CN114491309A (zh) 2022-05-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23742824

Country of ref document: EP

Kind code of ref document: A1