CN113296661A - Image processing method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN113296661A (application CN202110293020.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- content
- target
- displayed
- screen
- Prior art date
- 2021-03-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses an image processing method and device, an electronic device and a readable storage medium, belongs to the technical field of communication, and can solve the problem that browsing long screenshots is cumbersome and inconvenient. The method includes: when a first image is displayed, receiving a first input to a target area in the first image, where a third image corresponding to the target area is a partial image of the first image and includes target content in the target area; and displaying a second image in response to the first input, where the second image includes the target content and, when the second image is displayed, the target content in the second image does not exceed the display range of the screen. The method and device are applied to scenarios in which a user browses a long screenshot on an electronic device.
Description
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to an image processing method and device, an electronic device and a readable storage medium.
Background
With the development of electronic device technology, users use electronic devices more and more frequently. When a user wants to save the content currently displayed on the screen, the user can use the screenshot function of the electronic device.
In the related art, a user may save the content displayed on the screen through the screenshot function of an electronic device. In addition, through the long screenshot function, the user can capture more content of a larger page in a single screen capture operation.
However, when a user browses a long screenshot containing a large amount of content, much of that content cannot be fully displayed on the screen because of the limited display range of the screen. If the user wants to browse more content, multiple panning operations are usually required, which makes the operation process cumbersome.
Disclosure of Invention
An embodiment of the application aims to provide an image processing method, an image processing apparatus, an electronic device and a readable storage medium, which can solve the problem that browsing long screenshots is cumbersome and inconvenient.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including: under the condition that a first image is displayed, receiving a first input of a target area in the first image, wherein a third image corresponding to the target area is a partial image in the first image, and the third image comprises target content in the target area; displaying a second image in response to the first input; wherein the second image includes target content; in the case of displaying the second image, the target content in the second image does not exceed the display range of the screen.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including: the device comprises a receiving module and a display module; the receiving module is used for receiving a first input of a target area in a first image under the condition that the first image is displayed, wherein a third image corresponding to the target area is a partial image in the first image, and the third image comprises target content in the target area; a display module for displaying a second image in response to the first input received by the receiving module; wherein the second image includes target content; in the case of displaying the second image, the target content in the second image does not exceed the display range of the screen.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, when the first image is displayed, if a first input to the target area is received from the user, a second image including the image content of the target area of the first image is displayed, so that the electronic device can completely display the content of the target area of the first image. Because the content of the target area in the second image does not exceed the display range of the screen when the second image is displayed, the user can clearly browse the content of interest in the long screenshot without performing further operations on the electronic device.
Drawings
Fig. 1 is a schematic diagram of a long screenshot provided in an embodiment of the present application;
fig. 2 is one of schematic diagrams of an interface applied by an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 4 is a second schematic diagram of an interface applied by an image processing method according to the embodiment of the present application;
fig. 5 is a third schematic diagram of an interface applied by an image processing method according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The image processing method provided by the embodiment of the application can be applied to a scene that a user browses a long screenshot on electronic equipment.
Illustratively, for a scene in which a user browses a long screenshot on an electronic device in the related art, the image shown in fig. 1 is a long screenshot. When the user browses this long screenshot on the electronic device, the electronic device generally displays it in the manner shown in fig. 2 (A). In this display mode, the content of the long screenshot is usually scaled down, and if the user wants to see the content of the long screenshot clearly, the user needs to enlarge the area of interest through a double-tap input or a two-finger sliding input, so that the electronic device displays the image in the manner shown in fig. 2 (B). Meanwhile, for a long screenshot containing table content, even after the image is enlarged, the content of the same row or the same column cannot be completely displayed on the screen because of the limited screen size, and the user needs to slide to browse the content that is not displayed, which makes the operation cumbersome.
In view of this problem, in the technical solution provided by the embodiment of the application, when a long screenshot is displayed, the electronic device can crop and enlarge the content of a target area in response to a first input of the user to the target area, so that the user can view the content conveniently. Meanwhile, if part of the content of the cropped and enlarged image is still not displayed, the electronic device can also segment the image and rearrange the segmented images, so that the user can clearly browse all the content in the target area.
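As an illustration only, the following Python sketch (using the Pillow library) outlines this crop-and-enlarge flow. The screen resolution, the box coordinates in the usage comment, and the decision to scale the crop to the full screen width are assumptions made for this example and are not taken from the patent.

```python
from PIL import Image

SCREEN_W, SCREEN_H = 1080, 2340  # assumed screen resolution, not from the patent

def crop_and_enlarge(long_screenshot: Image.Image, target_box: tuple):
    """Crop the user-selected target area (the 'third image') and enlarge it.

    Returns the enlarged crop and a flag indicating whether it still exceeds
    the screen and therefore needs the segment-and-restitch step described later.
    """
    third_image = long_screenshot.crop(target_box)             # cropping operation
    scale = SCREEN_W / third_image.width                       # enlarge to screen width
    enlarged = third_image.resize((SCREEN_W, int(third_image.height * scale)))
    fits_screen = enlarged.height <= SCREEN_H
    return enlarged, fits_screen

# Hypothetical usage:
# second_image, fits = crop_and_enlarge(Image.open("long_screenshot.png"),
#                                       (0, 400, 1080, 1600))
```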
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 3, an image processing method provided in an embodiment of the present application may include the following steps 201 and 202:
in step 201, in a case where a first image is displayed, an image processing apparatus receives a first input to a target area in the first image.
And the third image corresponding to the target area is a partial image in the first image, and the third image comprises target content in the target area.
Illustratively, the first image may be a long screenshot captured by the user, a long screenshot sent by another electronic device, or a long image in a webpage or an application browsed by the user. When the user views the first image on the electronic device, the electronic device may display the first image in a display manner as shown in fig. 2 (a).
It is understood that the first image may be a picture which can be displayed on the screen.
For example, the third image may be an intermediate image of the image processing apparatus in generating the second image, and the image processing apparatus may first acquire the third image of the target area of the first image and obtain the second image after processing the third image.
For example, the first input may be an input to the target area; for example, the first input may be a selection operation in which the user circles a partial area of the first image. The image corresponding to the circled area is the third image, and the target area is the area of the first image that the user wants to view. The target area may be an area of a preset size, or an area determined by the user through the first input or other inputs.
In one possible implementation, the first input may include a first sub-input and a second sub-input. The target area may be determined by displaying a selection frame with a preset size after the image processing apparatus receives the first sub-input of the user, and then adjusting the size and the position of the selection frame through the second sub-input.
For example, as shown in fig. 4 (a), after the image processing apparatus receives a press input (i.e., the first sub-input) from the user, a selection frame 41 may be displayed, and the user may adjust a frame selection range of the selection frame. Thereafter, as shown in fig. 4 (B), the image processing apparatus displays a third image corresponding to the selection frame, the third image including the content in the selection frame.
In step 202, in response to the first input, the image processing apparatus displays a second image; the second image includes the target content, and when the second image is displayed, the content of the target area in the second image does not exceed the display range of the screen.
For example, the image processing apparatus may perform the target operation after responding to the first input described above, thereby generating the second image.
Illustratively, the above-described target operation may be a cropping operation of the first image by the image processing apparatus, and/or a scaling (reduction and enlargement) operation. Further, the target operation may further include a segmentation operation of the third image by the image processing apparatus, and after the segmentation operation, the target operation may further include a stitching operation of the segmented images.
For example, the second image may be an image of the third image that is cropped and/or zoomed by the image processing device, or may be an image of the third image that is segmented and re-stitched by the image processing device.
Illustratively, the image processing device generates the second image after performing the target operation on the third image, and can completely display the target content of the target area in the case of displaying the second image.
In this way, when the first image is displayed, if a first input to the target area is received from the user, a second image including the content of the target area is displayed, and because the content of the target area in the second image does not exceed the display range of the screen when the second image is displayed, the electronic device can completely display the content of the target area of the first image, and the user can clearly browse the content of interest in the long screenshot without performing further operations.
Optionally, in order to enable the image processing apparatus to display the second image without the target content in the second image exceeding the display range of the screen, the image processing apparatus may display the second image according to a preset display mode.
For example, the target content includes text content, and before the step 202, the image processing method provided in the embodiment of the present application may further include the following step 202a:
step 202a, the image processing apparatus adjusts the third image to be displayed in the first size, and if the target content in the third image displayed in the first size does not exceed the display range of the screen, determines the adjusted third image as the second image.
The character size of the text content in the second image is larger than or equal to a target character size, and the target character size is the size of the minimum character displayed by an operating system.
It is understood that, in the case where the target content includes text content, the image processing apparatus may adaptively adjust the cropped image (i.e., the third image) to facilitate browsing of the user, and adjust the display size of the third image, so that the size of the text characters in the third image is convenient for the user to browse.
Specifically, the image processing apparatus may determine the character size acceptable to the user in daily use by recording the user's usage habits, for example the system font setting or the web page zoom setting. The minimum character size acceptable to the user may also be determined by acquiring the smallest font provided by the operating system of the electronic device; this minimum acceptable character size is the target character size.
The third image is then adjusted to be displayed at the first size, so that the character size of the text content in the third image is greater than or equal to the character size acceptable to the user, that is, greater than or equal to the target character size.
If the target content in the third image after the display size is adjusted does not exceed the display range of the screen, the adjusted third image may be determined as the second image. That is, the character size of the text in the second image is equal to or larger than the target character size.
It can be understood that, in general, when an electronic device zooms an image in or out, it does not consider whether the zoomed image affects the user's browsing. In contrast, when zooming an image, the image processing method provided in the embodiment of the present application always keeps the character size of the text content in the image greater than or equal to the character size set for the user's daily use of the electronic device.
In this way, by adjusting the size of the third image, the user can conveniently browse the third image containing the text content without performing an additional enlargement operation, which improves browsing efficiency.
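As a minimal sketch of the sizing rule described above, assuming the character height in the cropped image has already been measured (for example from the page's rendering metadata or from OCR), and using illustrative pixel values that are not taken from the patent:

```python
def first_size_scale(measured_char_px: float, target_char_px: float) -> float:
    """Scale factor that brings the text in the third image up to at least the
    target character size (the smallest character size the OS displays)."""
    if measured_char_px <= 0:
        raise ValueError("measured character height must be positive")
    # Enlarge just enough to reach the threshold; never shrink below it.
    return max(1.0, target_char_px / measured_char_px)

def fits_after_scaling(img_w: int, img_h: int, scale: float,
                       screen_w: int, screen_h: int) -> bool:
    """True when the third image, displayed at the first size, stays within the
    screen and can therefore be used directly as the second image."""
    return img_w * scale <= screen_w and img_h * scale <= screen_h

# Hypothetical numbers: 18 px characters in the crop, 42 px OS minimum.
scale = first_size_scale(18, 42)                       # about 2.33
print(fits_after_scaling(900, 700, scale, 1080, 2340)) # False -> fall back to step 202b
```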
In a possible implementation manner, if the first content in the third image displayed in the first size by the image processing apparatus exceeds the display range of the screen, the image processing apparatus may segment the third image displayed in the first size, rejoin the segmented images, and determine the rejoined image as the second image.
For example, the third image is not the image that is finally displayed; rather, the third image is adjusted, and the resulting second image is the image that the image processing apparatus finally displays.
In this embodiment, for some especially long or wide screenshots, for example those containing table content, the content of a row or a column of the table normally cannot be completely displayed even after the image is cropped; in this case, the image processing apparatus may segment the long screenshot and re-splice the segmented images.
Before the step 202, the image processing method provided in the embodiment of the present application may further include the following step 202b:
step 202b, under the condition that the first content in the third image exceeds the display range of the screen, the image processing device divides the third image, rejoins the divided images, and determines the rejoined image as the second image.
The target content comprises first content, the segmented image comprises a first sub-image and a second sub-image, and the second sub-image comprises the first content.
Optionally, after the user selects and confirms the target area, if part of the content of the third image corresponding to the target area still exceeds the display range of the screen after the image processing apparatus performs operations such as cropping and zooming (reduction and enlargement), the image processing apparatus may cut off and segment the first content that exceeds the display range of the screen, and obtain the second image by re-stitching, so that the first content can be displayed within the display range of the screen when the image processing apparatus displays the second image.
As described in the above example, after the third image is adjusted to be displayed in the first size, if the first content in the third image exceeds the display range of the screen, that is, the third image displayed in the first size cannot display all the content, the third image is cropped and divided, and the second image is obtained by re-stitching, so that all the content (target content) in the re-stitched second image can be displayed on the screen.
Specifically, the image processing apparatus divides the third image into a first sub-image and a second sub-image, the second sub-image includes first content, and the first sub-image includes second content of the target content except the first content.
Illustratively, the image processing apparatus is capable of displaying all contents in the second image in a complete manner when displaying the second image obtained after performing the target operation. And, the character size of the content contained in the second image is greater than or equal to the target character size. That is, the first sub-image and the second sub-image may be adaptively resized, and the character size in the first sub-image and the second sub-image is larger than or equal to the target character size.
Illustratively, the image processing apparatus performs the division in accordance with the minimum display unit principle when dividing the third image. Taking the content in the third image as a table as an example, when the third image includes table content, the image processing apparatus does not divide the content in the same cell into two sub-images when dividing the third image, and at this time, the minimum display unit of the third image is one cell.
For example, in the case that the third image is an image containing table content as shown in fig. 5 (a), when the user wants to view the names of all developers (i.e., the content in the area 51), the image processing device cannot completely display the content in the area 51, and at this time, the image processing device may perform image segmentation on the third image and stitch the third image into an image as shown in fig. 5 (B), so that the image processing device can completely display the table content in the user-selected area.
In this way, when part of the content of the third image exceeds the display range of the screen, the image processing apparatus can segment the third image, re-splice the segmented content, and generate the second image, so that the target content of the target area can be completely displayed within the display range of the screen when the image processing apparatus displays the second image.
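The following sketch shows one possible shape of such a split-and-restitch step for a table crop that is too wide for the screen. The list of column boundary coordinates (the minimum display units), the white background, and the choice to place the overflow below the remaining content are assumptions made for illustration rather than the patent's exact algorithm.

```python
from PIL import Image

def split_and_restitch(third_image: Image.Image, screen_w: int,
                       column_edges: list) -> Image.Image:
    """Split a too-wide third image at a cell boundary and stack the pieces.

    column_edges holds the x coordinates of cell boundaries, so no cell
    (minimum display unit) is ever cut in half. Assumes at least one boundary
    lies within the screen width.
    """
    if third_image.width <= screen_w:
        return third_image                                   # already fits
    cut_x = max(x for x in column_edges if 0 < x <= screen_w)
    first_sub = third_image.crop((0, 0, cut_x, third_image.height))
    second_sub = third_image.crop((cut_x, 0, third_image.width, third_image.height))
    # Re-stitch: the overflowing columns (second sub-image) go below the rest.
    stitched = Image.new("RGB",
                         (max(first_sub.width, second_sub.width),
                          first_sub.height + second_sub.height),
                         "white")
    stitched.paste(first_sub, (0, 0))
    stitched.paste(second_sub, (0, first_sub.height))
    return stitched
```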
Further alternatively, in this embodiment of the application, the first content may be located in a transverse direction or a longitudinal direction of the third image, and for a case where the content in different directions cannot be completely displayed, the image processing apparatus may perform image segmentation on the third image according to the following principle.
Illustratively, the step 202b may further include the following step 202b1:
in step 202b1, in case that the first content exceeds the display range of the screen along the first direction, the image processing apparatus splices the first sub-image and the second sub-image along the second direction, and the first direction is perpendicular to the second direction.
It is understood that, in order to enable the image processing apparatus to display the content beyond the screen display range in the target area, the image processing apparatus may perform image segmentation on the content that cannot be displayed on the screen in the target area, and re-stitch the segmented images.
For example, when the first content is located in the horizontal direction of the third image, the image processing apparatus may divide off the first content and stitch the resulting second sub-image below the first sub-image in the vertical direction, so that the image processing apparatus can display the first content when displaying the second image.
It should be noted that the content in the target area that cannot be displayed on the screen refers to the first content of the third image that cannot be shown on the screen when the third image, obtained by merely cropping and scaling the target area of the first image, is displayed.
For example, taking the target content as the cell content as an example, for a case that the cell content in the horizontal axis direction exceeds the display range of the screen, the image processing apparatus may splice the cell content exceeding the display range of the screen in the horizontal axis direction into the vertical axis direction; for the case where the cell content in the longitudinal axis direction exceeds the display range of the screen, the image processing apparatus may splice the cell content exceeding the display range of the screen in the longitudinal axis direction into the transverse axis direction.
In this way, the image processing apparatus can splice the content that cannot be completely displayed in the target area in the first direction along the second direction, so that the user can completely view the content in the target area on the screen.
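As a compact, illustrative way to express this direction rule (the function name and the string return values are assumptions, not terms from the patent):

```python
def stitch_direction(img_w: int, img_h: int, screen_w: int, screen_h: int) -> str:
    """Pick the axis along which split pieces are re-joined.

    Content that overflows the screen along one direction is re-stitched along
    the perpendicular direction so that the joined result fits on the screen.
    """
    if img_w > screen_w:
        return "vertical"    # horizontal overflow -> stack pieces top to bottom
    if img_h > screen_h:
        return "horizontal"  # vertical overflow -> place pieces side by side
    return "none"            # already fits; no re-stitching needed

# Example with assumed sizes: a wide table crop on a 1080x2340 screen.
print(stitch_direction(1600, 700, 1080, 2340))   # -> "vertical"
```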
According to the image processing method provided by the embodiment of the application, when the first image is displayed, the electronic device can crop and enlarge the content of the target area in response to the user's first input to the target area, so that the user can view the content conveniently. Meanwhile, if part of the content of the cropped and enlarged image is still not displayed, the electronic device can directly segment the content of the target area of the first image and rearrange the segmented images, so that the user can clearly and completely browse all the content in the target area.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
In the embodiments of the present application, the above-described methods are illustrated in the drawings. The image processing method is exemplarily described with reference to one of the drawings in the embodiments of the present application. In specific implementation, the image processing methods shown in the above method drawings may also be implemented by combining with any other drawings that may be combined, which are illustrated in the above embodiments, and are not described herein again.
Fig. 6 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the present application, and as shown in fig. 6, the image processing apparatus 600 includes: a receiving module 601 and a display module 602; a receiving module 601, configured to receive a first input to a target area in a first image when the first image is displayed, where a third image corresponding to the target area is a partial image in the first image, and the third image includes target content in the target area; a display module 602, configured to display a second image in response to the first input received by the receiving module 601; wherein the second image includes target content; in the case of displaying the second image, the target content in the second image does not exceed the display range of the screen.
Optionally, in this embodiment of the present application, the image processing apparatus 600 further includes: a processing module 603; the processing module 603 is configured to, when the first content in the third image exceeds the display range of the screen, segment the third image, rejoin the segmented images, and determine the rejoined image as the second image; the target content comprises first content, the segmented image comprises a first sub-image and a second sub-image, and the second sub-image comprises the first content.
Optionally, in this embodiment of the present application, the target content includes text content; the processing module 603 is configured to adjust the third image to be displayed in the first size, and if the target content in the third image displayed in the first size does not exceed the display range of the screen, determine the adjusted third image as the second image; and the character size of the character content in the second image is larger than or equal to the target character size, and the target character size is the size of the minimum character displayed by the operating system.
Optionally, in this embodiment of the application, the processing module 603 is specifically configured to, if the first content in the third image displayed in the first size exceeds the display range of the screen, segment the third image displayed in the first size, rejoin the segmented images, and determine the rejoined image as the second image.
Optionally, in this embodiment of the application, the processing module 603 is specifically configured to, when the first content exceeds the display range of the screen in the first direction, splice the first sub-image and the second sub-image in the second direction, where the first direction is perpendicular to the second direction.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of fig. 2 to fig. 5, and for avoiding repetition, details are not repeated here.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
According to the image processing apparatus provided by the embodiment of the application, when the first image is displayed, the electronic device can crop and enlarge the content of the target area in response to the user's first input to the target area, so that the user can view the content conveniently. Meanwhile, if part of the content of the cropped and enlarged image is still not displayed, the electronic device can directly segment the content of the target area of the first image and rearrange the segmented images, so that the user can clearly and completely browse all the content in the target area.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 110, a memory 109, and a program or an instruction stored in the memory 109 and executable on the processor 110, where the program or the instruction is executed by the processor 110 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 107 is configured to receive a first input to a target area in a first image when the first image is displayed, where a third image corresponding to the target area is a partial image in the first image, and the third image includes target content in the target area; a display unit 106 for displaying a second image in response to a first input received by the user input unit 107; wherein the second image includes the target content; in a case where the second image is displayed, the target content in the second image does not exceed a display range of a screen.
In this way, when the first image is displayed, if a first input to the target area is received from the user, a second image including the image content of the target area of the first image is displayed, so that the electronic device can completely display the content of the target area of the first image. Because the content of the target area in the second image does not exceed the display range of the screen when the second image is displayed, the user can clearly browse the content of interest in the long screenshot without performing further operations on the electronic device.
Optionally, in this embodiment of the present application, the electronic device 100 further includes: a processor 110; the processor 110 is configured to, when the first content in the third image exceeds the display range of the screen, segment the third image, re-splice the segmented images, and determine the re-spliced image as the second image; wherein the target content comprises the first content, the segmented image comprises a first sub-image and a second sub-image, and the second sub-image comprises the first content.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to, if the first content in the third image displayed in the first size exceeds the display range of the screen, segment the third image displayed in the first size, rejoin the segmented images, and determine the rejoined image as the second image.
In this way, the image processing apparatus can display the target content in the target area completely by adjusting the size and the character size of the third image, so that the second image obtained by adjusting the third image does not affect the browsing of the image content by the user.
Optionally, in this embodiment of the present application, the target content includes text content; a processor 110, configured to adjust the third image to be displayed in a first size, and if target content in the third image displayed in the first size does not exceed a display range of the screen, determine the adjusted third image as a second image; the character size of the text content in the second image is larger than or equal to a target character size, and the target character size is the size of the minimum character displayed by an operating system.
In this way, when part of the content of the third image exceeds the display range of the screen, the image processing apparatus can segment the third image, re-splice the segmented content, and generate the second image, so that the target content of the target area can be completely displayed within the display range of the screen when the image processing apparatus displays the second image.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to, in a case that the first content exceeds the display range of the screen in a first direction, splice the first sub-image and the second sub-image in a second direction, where the first direction is perpendicular to the second direction.
In this way, the image processing apparatus can splice the content that cannot be completely displayed in the target area in the first direction along the second direction, so that the user can completely view the content in the target area on the screen.
According to the electronic device provided by the embodiment of the application, when the first image is displayed, the electronic device can crop and enlarge the content of the target area in response to the user's first input to the target area, so that the user can view the content conveniently. Meanwhile, if part of the content of the cropped and enlarged image is still not displayed, the electronic device can directly segment the content of the target area of the first image and rearrange the segmented images, so that the user can clearly and completely browse all the content in the target area.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. An image processing method, characterized in that the method comprises:
receiving a first input of a target area in a first image under the condition that the first image is displayed, wherein a third image corresponding to the target area is a partial image in the first image, and the third image comprises target content in the target area;
displaying a second image in response to the first input;
wherein the second image includes the target content; in a case where the second image is displayed, the target content in the second image does not exceed a display range of a screen.
2. The method of claim 1, wherein prior to displaying the second image, further comprising:
under the condition that the first content in the third image exceeds the display range of the screen, segmenting the third image, re-splicing the segmented images, and determining the re-spliced image as a second image;
wherein the target content comprises the first content, the segmented image comprises a first sub-image and a second sub-image, and the second sub-image comprises the first content.
3. The method of claim 1, wherein the target content comprises textual content; before the displaying the second image, the method further comprises:
adjusting the third image to be displayed in a first size, and if the target content in the third image displayed in the first size does not exceed the display range of the screen, determining the adjusted third image as a second image;
the character size of the text content in the second image is larger than or equal to a target character size, and the target character size is the size of the minimum character displayed by an operating system.
4. The method of claim 3,
and if the first content in the third image displayed in the first size exceeds the display range of the screen, segmenting the third image displayed in the first size, re-splicing the segmented images, and determining the re-spliced image as a second image.
5. The method of claim 2, wherein said re-stitching the segmented images comprises:
and under the condition that the first content exceeds the display range of the screen along a first direction, splicing the first sub-image and the second sub-image along a second direction, wherein the first direction is perpendicular to the second direction.
6. An image processing apparatus, characterized in that the apparatus comprises: the device comprises a receiving module and a display module;
the receiving module is configured to receive a first input to a target area in a first image when the first image is displayed, where a third image corresponding to the target area is a partial image in the first image, and the third image includes target content in the target area;
the display module is used for responding to the first input received by the receiving module and displaying a second image;
wherein the second image includes the target content; in a case where the second image is displayed, the target content in the second image does not exceed a display range of a screen.
7. The apparatus of claim 6, further comprising: a processing module;
the processing module is used for segmenting the third image under the condition that the first content in the third image exceeds the display range of the screen, splicing the segmented images again, and determining the spliced images as second images;
wherein the target content comprises the first content, the segmented image comprises a first sub-image and a second sub-image, and the second sub-image comprises the first content.
8. The apparatus of claim 6, wherein the target content comprises textual content; the device further comprises: a processing module;
the processing module is configured to adjust the third image to be displayed in a first size, and if the target content in the third image displayed in the first size does not exceed the display range of the screen, determine the adjusted third image as a second image;
the character size of the text content in the second image is larger than or equal to a target character size, and the target character size is the size of the minimum character displayed by an operating system.
9. The apparatus of claim 8,
the processing module is specifically configured to, if the first content in the third image displayed in the first size exceeds the display range of the screen, segment the third image displayed in the first size, re-splice the segmented images, and determine the re-spliced image as the second image.
10. The apparatus of claim 7,
the processing module is specifically configured to splice the first sub-image and the second sub-image along a second direction when the first content exceeds the display range of the screen along the first direction, where the first direction is perpendicular to the second direction.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110293020.2A CN113296661B (en) | 2021-03-18 | 2021-03-18 | Image processing method, device, electronic equipment and readable storage medium |
PCT/CN2022/081208 WO2022194211A1 (en) | 2021-03-18 | 2022-03-16 | Image processing method and apparatus, electronic device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113296661A true CN113296661A (en) | 2021-08-24 |
CN113296661B CN113296661B (en) | 2023-10-27 |
Family
ID=77319197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110293020.2A Active CN113296661B (en) | 2021-03-18 | 2021-03-18 | Image processing method, device, electronic equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113296661B (en) |
WO (1) | WO2022194211A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114327730A (en) * | 2021-12-31 | 2022-04-12 | 维沃移动通信有限公司 | Image display method and electronic device |
WO2022194211A1 (en) * | 2021-03-18 | 2022-09-22 | 维沃移动通信有限公司 | Image processing method and apparatus, electronic device and readable storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116503255A (en) * | 2023-05-16 | 2023-07-28 | 立臻科技(昆山)有限公司 | Long screenshot generation method and device, electronic equipment and readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502524A (en) * | 2016-09-27 | 2017-03-15 | 乐视控股(北京)有限公司 | Screenshotss method and device |
KR101983725B1 (en) * | 2017-08-03 | 2019-09-03 | 엘지전자 주식회사 | Electronic device and method for controlling of the same |
CN112230816B (en) * | 2020-10-23 | 2022-03-18 | 岭东核电有限公司 | High-efficiency screenshot method and device, computer equipment and storage medium |
CN113296661B (en) * | 2021-03-18 | 2023-10-27 | 维沃移动通信有限公司 | Image processing method, device, electronic equipment and readable storage medium |
- 2021
  - 2021-03-18 CN CN202110293020.2A patent/CN113296661B/en active Active
- 2022
  - 2022-03-16 WO PCT/CN2022/081208 patent/WO2022194211A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105843494A (en) * | 2015-01-15 | 2016-08-10 | 中兴通讯股份有限公司 | Method and device for realizing region screen capture, and terminal |
US20190087137A1 (en) * | 2017-09-15 | 2019-03-21 | Brother Kogyo Kabushiki Kaisha | Recording medium |
US20200050349A1 (en) * | 2018-08-07 | 2020-02-13 | Chiun Mai Communication Systems, Inc. | Electronic device and screenshot capturing method |
CN109460177A (en) * | 2018-09-27 | 2019-03-12 | 维沃移动通信有限公司 | A kind of image processing method and terminal device |
CN110231905A (en) * | 2019-05-07 | 2019-09-13 | 华为技术有限公司 | A kind of screenshotss method and electronic equipment |
CN111143013A (en) * | 2019-12-30 | 2020-05-12 | 维沃移动通信有限公司 | Screenshot method and electronic equipment |
CN111641750A (en) * | 2020-05-19 | 2020-09-08 | Oppo广东移动通信有限公司 | Screen capture method, terminal and non-volatile computer-readable storage medium |
Non-Patent Citations (1)
Title |
---|
Zheng Shijue et al., "Production of CAI Courseware and Design of Web-based Courses" (《CAI课件的制作与网络课程的设计》), Central China Normal University Press, page 71 * |
Also Published As
Publication number | Publication date |
---|---|
CN113296661B (en) | 2023-10-27 |
WO2022194211A1 (en) | 2022-09-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||