CN110825286B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN110825286B
CN110825286B CN201911051899.9A
Authority
CN
China
Prior art keywords
image
preset
reducing
target object
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911051899.9A
Other languages
Chinese (zh)
Other versions
CN110825286A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201911051899.9A
Publication of CN110825286A
Application granted
Publication of CN110825286B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure disclose an image processing method and apparatus, and an electronic device. One embodiment of the method comprises: in response to detecting a first preset operation performed by a user on a currently displayed first image, acquiring a second image, where the first image is obtained by reducing the region of a target object in the second image to a preset size; overlaying the first image with a third image obtained by reducing the second image; and performing an animated transition operation that transitions the third image to the second image, the operation including at least one of translation, magnification, and rotation. This embodiment avoids the flicker caused by switching the first image directly to the second image.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
In some scenarios, to ensure transmission speed and save storage space, the image displayed in an application (App) client may be a reduced image obtained by shrinking a partial region of an initial image. When the user performs an operation such as clicking on the reduced image, an instruction to view the initial image is issued. The initial image may then be re-acquired according to that instruction, and the displayed reduced image switched to the initial image.
Disclosure of Invention
This disclosure is provided to introduce concepts in a simplified form that are further described below in the detailed description. This disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides an image processing method and device and an electronic device, which can avoid flicker caused by directly switching a first image into a second image.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: in response to detecting a first preset operation performed by a user on a currently displayed first image, acquiring a second image, where the first image is obtained by reducing the region of a target object in the second image to a preset size; overlaying the first image with a third image obtained by reducing the second image; and performing an animated transition operation that transitions the third image to the second image, the operation including at least one of translation, magnification, and rotation.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: an acquisition unit configured to acquire a second image in response to detecting a first preset operation performed by a user on a currently displayed first image, where the first image is obtained by reducing the region of a target object in the second image to a preset size; an overlay unit configured to overlay the first image with a third image obtained by reducing the second image; and an execution unit configured to perform an animated transition operation that transitions the third image to the second image, the operation including at least one of translation, magnification, and rotation.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image processing method according to the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the image processing method according to the first aspect.
According to the image processing method and apparatus and the electronic device provided by the embodiments of the present disclosure, when a first preset operation performed by a user on a currently displayed first image is detected, a second image can be acquired; a third image obtained by reducing the second image can then be used to cover the first image; and an animated transition from the third image to the second image can be performed. Because the size difference between the third image and the first image is small, and the third image transitions gradually to the second image, the flicker caused by switching the first image directly to the second image is avoided.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram for one embodiment of an image processing method according to the present disclosure;
FIG. 2 is a schematic diagram of an application scenario of an image processing method according to the present disclosure;
FIG. 3 is a flow diagram of yet another embodiment of an image processing method according to the present disclosure;
FIG. 4 is a schematic block diagram of one embodiment of an image processing apparatus according to the present disclosure;
FIG. 5 is an exemplary system architecture to which the image processing method of one embodiment of the present disclosure may be applied;
fig. 6 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will recognize that "one or more" is meant unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow of one embodiment of an image processing method according to the present disclosure is shown. The image processing method as shown in fig. 1 includes the steps of:
step 101, in response to detecting a first preset operation executed by a user for a currently displayed first image, acquiring a second image.
In this embodiment, in response to detecting a first preset operation performed by a user on a currently displayed first image, the electronic device executing the image processing method may acquire a second image locally or from a communicatively connected server.
The first preset operation may be a click, a long press, or a slide operation performed on the first image by the user.
The first image may be an image obtained by reducing a region of the target object in the second image to a preset size. Here, the target object may be an arbitrary object specified in advance. For example, the target object may be a human face, a plant, an animal, etc. Taking the target object as a face for example, the second image may be an image containing a face. The first image may be an image obtained by reducing the area of the face in the second image.
In some alternative implementations, the first image may be obtained by the following steps.
First, the electronic device that generates the first image may crop the region where the target object is located from the second image, resulting in the target image.
Then, the electronic device generating the first image may reduce the target image to a preset size, resulting in the first image.
The electronic device that generates the first image may be an electronic device that executes the image processing method, or may be another electronic device that is communicatively connected to the electronic device that executes the image processing method (for example, a server that is communicatively connected to the electronic device that executes the image processing method).
In these optional implementation manners, the region where the target object is located may be cut from the second image according to actual requirements, so as to obtain the first image.
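The two generation steps above (crop the region where the target object is located, then reduce the cropped target image to a preset size) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: images are represented as plain 2D lists of pixel values, the region bounds are passed in explicitly, and a nearest-neighbour reduction stands in for whatever scaler the device actually uses.

```python
def crop(image, top, bottom, left, right):
    """Cut the region where the target object is located out of the second image."""
    return [row[left:right] for row in image[top:bottom]]

def resize(image, out_h, out_w):
    """Nearest-neighbour reduction of the target image to a preset size."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def make_first_image(second_image, region, preset_size):
    """Step 1: crop the target region; step 2: reduce it to the preset size."""
    top, bottom, left, right = region
    target_image = crop(second_image, top, bottom, left, right)
    h, w = preset_size
    return resize(target_image, h, w)
```

For example, cropping the top-left quarter of an 8x8 image and reducing it to a 2x2 preset size yields a 2x2 first image whose top-left pixel matches the second image's top-left pixel.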
Step 102, the first image is overlaid with a third image obtained by reducing the second image.
In this embodiment, after acquiring the second image, the electronic device executing the image processing method may reduce the second image to obtain a third image. In practice, the reduction ratio of the second image may be determined according to actual requirements. The electronic device may then overlay the third image on the first image. It will be appreciated that, with the third image overlaid on top of the currently displayed first image, the third image is what is displayed.
And 103, performing animation transition operation for performing animation transition on the third image to the second image.
In the present embodiment, after displaying the third image, the electronic device executing the image processing method may perform an animation transition operation of animating the third image to the second image. Here, the animated transition may be a dynamic process in which the third image gradually transitions to the second image.
The animated transition operation may include at least one of translating, magnifying, and rotating the third image. As analyzed above, the third image is a reduced version of the second image. It will be appreciated that the third image can therefore be converted into the second image by translating, magnifying, and rotating it.
Specifically, the electronic device that executes the image processing method may set in advance parameters (for example, parameters for translating, enlarging, and rotating the third image) that realize the animated transition of the third image to the second image. Therefore, the electronic device executing the image processing method may perform at least one of translation, magnification and rotation on the third image using the preset parameters, thereby completing the animated transition from the third image to the second image.
In some alternative implementations, the electronic device executing the image processing method may perform step 103 in the following manner.
Step L1, a matrix of pixel values of the third image and a matrix of pixel values of the second image are determined.
The pixel value matrix may include pixel values of each pixel point in the image. It is to be understood that the pixel value matrix of the third image may include pixel values of respective pixel points in the third image, and the pixel value matrix of the second image may include pixel values of respective pixel points in the second image.
And L2, determining parameters required for the animation transition of the third image at the current position to the second image at the preset position according to the pixel value matrix of the third image, the pixel value matrix of the second image, the preset time length and the preset time interval. The parameters of the animated transition of the third image to the second image may include parameters of at least one of translation, magnification and rotation of the third image.
The preset time length may be a preset duration for the animated transition from the third image to the second image. In practice, the third image needs to be processed multiple times to realize the transition. Accordingly, the preset time interval may be a preset interval between two adjacent processing passes of the third image. It will be appreciated that each processing pass may include at least one of translating, magnifying, and rotating the third image.
Specifically, the electronic device executing the image processing method may determine, according to the pixel value matrix of the third image, the pixel value matrix of the second image, the preset time duration and the preset time interval, a parameter for processing the third image at every preset time interval within the preset time duration. It is understood that the determined parameters for processing the third image for a plurality of times are parameters required for animating the transition of the third image at the current position to the second image at the preset position.
Step L3, animating the third image at the current position to the second image at the preset position according to the determined parameters.
Specifically, the electronic device performing the image processing method may perform at least one of translation, magnification, and rotation on the third image according to the parameter of each preset time interval until the third image at the current position transitions to the second image at the preset position. It should be appreciated that animating the transition of the third image to the second image may enable the display of the second image.
In these optional implementation manners, parameters for translating, amplifying and rotating the third image at each preset time interval are determined, and then the third image is translated, amplified and rotated through the parameters at each preset time interval, so that the third image can be smoothly transited to the second image within a preset time period, and further the visual effect is improved.
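A minimal sketch of how per-interval parameters over a preset duration might look. It simplifies the description above: instead of deriving parameters from the two pixel-value matrices, it assumes the transition has already been summarized as a start pose and an end pose of the form (x, y, scale, angle), and interpolates linearly at each preset time interval. The function name, the pose layout, and the linear easing are all assumptions, not the patent's method.

```python
def transition_params(start, end, duration_ms, interval_ms):
    """Poses (x, y, scale, angle) for each tick of the animated transition.

    start/end: tuples describing the third image's current pose and the
    second image's preset pose; one frame is produced per preset interval.
    """
    steps = duration_ms // interval_ms
    frames = []
    for i in range(1, steps + 1):
        t = i / steps  # goes from 1/steps to 1.0 over the preset duration
        frames.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return frames
```

With a 300 ms duration and 100 ms interval this yields three frames, the last of which equals the end pose exactly, so the transition finishes at the second image's preset position.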
In some alternative implementations, after step 103, the electronic device executing the image processing method may perform the following steps.
Step S1, in response to detecting a second preset operation performed by the user on the second image at the preset position, animating the second image at the preset position back to the third image at the current position.
The second preset operation may be an operation performed by the user on the second image, such as clicking, long-pressing, or sliding, or may be a closing operation performed by the user on the page where the second image is located.
Specifically, the electronic device executing the image processing method may determine parameters for translating, reducing and rotating the second image at preset time intervals within a preset time duration in a manner similar to step L2. Then, in a manner similar to step L3, at least one of translation, reduction and rotation of the second image may be performed according to the parameters of each preset time interval until the second image at the preset position transitions to the third image at the current position.
It will be appreciated that the animated transition of the second image to the third image may effect a transition from the display of the second image to the display of the third image, with the third image overlaying the first image.
In step S2, the page where the third image is located is closed to display the first image.
After the second image is transited to the third image, the electronic device executing the image processing method may close the page where the third image is located, and further display the first image.
In these optional implementations, if the second image were switched directly to the first image after the second preset operation is detected, the large size difference between the two images would usually cause flicker during switching, degrading the visual effect. Therefore, the second image is first animated back to the third image, whose size differs little from the first image, and the first image is then displayed. This avoids the flicker caused by switching the second image directly to the first image, further improving the user experience.
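The reverse transition of steps S1 and S2 mirrors the forward one. A hedged sketch, assuming the forward animation is available as a list of per-interval poses: the reverse animation simply plays those frames backwards and finishes at the third image's original pose.

```python
def reverse_transition(forward_frames, start_pose):
    """Frames that animate the second image back to the third image's pose.

    forward_frames: poses that animated the third image to the second image
    (the last entry is the second image's preset pose); start_pose is the
    third image's original pose, appended so the reverse run ends there.
    """
    return list(reversed(forward_frames[:-1])) + [start_pose]
```

After the last reverse frame is shown, the page holding the third image can be closed, leaving the first image displayed (step S2).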
Referring to fig. 2, an application scenario of the image processing method according to the embodiment of the present disclosure is shown. As shown in fig. 2, the terminal device 201 currently displays a first image 202. The first image 202 is an image obtained by reducing the area 2031 where the face included in the second image 203 is located to a preset size.
In response to detecting a first preset operation performed by the user with respect to the first image 202, the terminal device 201 may acquire the second image 203. After that, the terminal device 201 may overlay the currently displayed first image 202 with the third image 204 obtained by reducing the second image 203. Further, the terminal device 201 may perform at least one of translation, magnification and rotation on the third image 204 using the preset parameters, thereby completing the animated transition from the third image 204 to the second image 203.
In this embodiment, when a user views the second image from a currently displayed first image, switching the first image directly to the second image would usually cause flicker because of the large size difference between the two images, resulting in a poor visual effect. Therefore, the second image is first reduced to obtain a third image, the currently displayed first image is covered with the third image, and the third image is then animated to the second image. This avoids the flicker caused by switching the first image directly to the second image, further improving the user experience.
With continued reference to fig. 3, a flow of yet another embodiment of an image processing method according to the present disclosure is shown. As shown in fig. 3, the image processing method includes the steps of:
step 301, in response to detecting a first preset operation performed by a user for a currently displayed first image, acquiring a second image.
Step 301 may be performed in a manner similar to step 101 in the embodiment shown in fig. 1, and the above description of step 101 also applies to step 301, which is not repeated here.
Step 302, position information of a region of the target object in the second image is acquired.
In this embodiment, after acquiring the second image, the electronic device executing the image processing method may acquire the position information of the region of the target object in the second image locally or from a communicatively connected server.
The above-mentioned position information may be information indicating a position of a region in which the target object is located in the second image. For example, the position information may be a distance between a boundary of a region where the target object is located and a boundary of the second image. For example, if the upper boundary, the lower boundary, the left boundary and the right boundary of the region in which the target object is located are a, b, c and d, respectively, and the upper boundary, the lower boundary, the left boundary and the right boundary of the second image are A, B, C, D, respectively, the position information may include: the distance between boundary a and boundary a, the distance between boundary B and boundary B, the distance between boundary C and boundary C, and the distance between boundary D and boundary D. For another example, the position information may be a ratio of a distance between a boundary of the region where the target object is located and a boundary of the second image to a width or height of the second image. Continuing with the above example, where the width and height of the second image are w, h, respectively, then the position information may include: the ratio of the distance between boundary a and boundary a to h, the ratio of the distance between boundary B and boundary B to h, the ratio of the distance between boundary C and boundary C to w, and the ratio of the distance between boundary D and boundary D to w. For another example, the position information may be coordinates of each vertex of the region in which the target object is located.
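As one concrete illustration of the second encoding above (boundary distances expressed as ratios of the second image's width and height), the position information might be computed as follows. The tuple layout and the field names are assumptions made for this sketch, not part of the patent.

```python
def position_ratios(region, image_w, image_h):
    """Ratios of each boundary distance to the second image's width or height.

    region = (a, b, c, d): the target region's top, bottom, left, and right
    boundaries, measured in pixels from the second image's top-left corner.
    """
    a, b, c, d = region
    return {
        "top": a / image_h,               # distance a-to-A over h
        "bottom": (image_h - b) / image_h,  # distance b-to-B over h
        "left": c / image_w,              # distance c-to-C over w
        "right": (image_w - d) / image_w,   # distance d-to-D over w
    }
```

Ratios (rather than raw pixel distances) stay valid if the second image is later rendered at a different resolution, which is presumably why the patent lists them as an alternative encoding.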
Step 303, reducing the second image until the region indicated by the position information is the same size as the first image, obtaining a third image.
In this embodiment, after acquiring the position information, the electronic device executing the image processing method may reduce the second image until the region indicated by the position information is the same size as the first image, resulting in a third image. It is understood that the region of the target object in the third image is then the same size as the first image.
Step 304, placing a third image on top of the first image, and causing a region of the target object in the third image to coincide with the first image.
In this embodiment, after obtaining the third image, the electronic device executing the image processing method may place the third image on top of the first image, and make the region of the target object in the third image coincide with the first image. It will be understood that, at this time, the third image covers the first image, and the region of the target object in the third image covers the first image.
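Steps 303 and 304 amount to computing a scale factor and a placement offset. The sketch below assumes simple screen coordinates, with rectangles given as (left, top, width, height) and the scale taken from the width (an aspect-preserving reduction is assumed); the names are illustrative, not from the patent.

```python
def overlay_placement(region, first_rect):
    """Scale and offset that make the reduced target region coincide with the first image.

    region: the target object's rectangle inside the full-size second image.
    first_rect: the on-screen rectangle of the currently displayed first image.
    """
    rx, ry, rw, rh = region
    fx, fy, fw, fh = first_rect
    # Step 303: reduce the second image so the region matches the first image's size.
    scale = fw / rw
    # Step 304: position the scaled third image so the scaled region lands on first_rect.
    offset_x = fx - rx * scale
    offset_y = fy - ry * scale
    return scale, offset_x, offset_y
```

Drawing the third image at (offset_x, offset_y) with the returned scale puts the target-object region exactly over the first image, so the third image covers it without any visible jump.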
Step 305, performing an animation transition operation for animating the third image to the second image.
Step 305 may be performed in a manner similar to step 103 in the embodiment shown in fig. 1, and the above description of step 103 also applies to step 305, which is not repeated here.
In this embodiment, because the region of the target object in the third image is the same size as the first image, placing the third image over the first image with that region coinciding with the first image lets the third image completely cover the first image, further improving the visual effect. Reducing the second image to obtain the third image, covering the currently displayed first image with it, and then animating the third image to the second image avoids the flicker caused by switching the first image directly to the second image, further improving the user experience.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an image processing apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable in various electronic devices.
As shown in fig. 4, the image processing apparatus of the present embodiment includes: an acquisition unit 401, a covering unit 402 and an execution unit 403. Wherein, the obtaining unit 401 is configured to: and acquiring a second image in response to the detection of a first preset operation executed by a user on the currently displayed first image, wherein the first image is an image obtained by reducing the area of the target object in the second image to a preset size. The covering unit 402 is configured to: the first image is overlaid with a third image obtained by reducing the second image. Execution unit 403 is to: performing an animated transition operation that animates the third image to the second image, the animated transition operation including at least one of a translation, a magnification, and a rotation.
In this embodiment, specific processing of the obtaining unit 401, the covering unit 402, and the executing unit 403 of the image processing apparatus and technical effects thereof can refer to related descriptions of step 101, step 102, and step 103 in the corresponding embodiment of fig. 1, which are not described herein again.
In some optional implementations, the covering unit 402 may include: an acquisition subunit (not shown), a reduction subunit (not shown), and a placement subunit (not shown). The acquisition subunit may be configured to acquire position information of the region of the target object in the second image. The reduction subunit may be configured to reduce the second image until the region indicated by the position information is the same size as the first image, obtaining a third image. The placement subunit may be configured to place the third image over the first image and make the region of the target object in the third image coincide with the first image.
In some optional implementations, the execution unit 403 may include: a first determining subunit (not shown), a second determining subunit (not shown), and a transition subunit (not shown). Wherein the first determining subunit is operable to: and determining a pixel value matrix of the third image and a pixel value matrix of the second image, wherein the pixel value matrix comprises the pixel values of all pixel points in the images. The second determining subunit may be to: and determining parameters required for the animation transition of the third image at the current position to the second image at the preset position according to the pixel value matrix of the third image, the pixel value matrix of the second image, the preset time length and the preset time interval, wherein the parameters comprise parameters for performing at least one of translation, amplification and rotation. The transition subunit may be for: and according to the determined parameters, the third image at the current position is animated and transited to the second image at the preset position.
In some optional implementations, the image processing apparatus may further include: a transition unit (not shown in the figure) and a display unit (not shown in the figure). Wherein the transition unit may be configured to: and in response to detecting a second preset operation performed by the user on the second image in the preset position, animating the second image in the preset position to a third image in the current position. The display unit may be configured to: and closing the page where the third image is located to display the first image.
In some optional implementations, the image processing apparatus may further include a first image acquisition unit configured to: crop the region where the target object is located from the second image to obtain a target image; and reduce the target image to a preset size to obtain the first image.
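This crop-then-reduce step can be sketched with plain nested lists and nearest-neighbour sampling. The function name and the choice of nearest-neighbour resampling are assumptions for illustration; a real implementation would likely use a platform image library:

```python
# Hypothetical sketch of cropping the target region and reducing it to a preset size.
def crop_and_reduce(image, region, preset_size):
    """image: 2D list of pixel values (rows of columns); region: (x, y, w, h)."""
    x, y, w, h = region
    # Crop the target-object region out of the second image.
    target = [row[x:x + w] for row in image[y:y + h]]
    pw, ph = preset_size
    # Nearest-neighbour reduction -- a simple stand-in for whatever
    # resampling filter the implementation actually uses.
    return [[target[int(j * h / ph)][int(i * w / pw)] for i in range(pw)]
            for j in range(ph)]
```

Reducing a 4x4 image to 2x2 this way keeps every other pixel from every other row; cropping a region whose size already equals the preset size returns that region unchanged.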
With further reference to fig. 5, fig. 5 illustrates an exemplary system architecture to which the image processing method of one embodiment of the present disclosure may be applied.
As shown in fig. 5, the system architecture may include terminal devices 501, 502, a network 503, and a server 504. The network 503 is the medium used to provide communication links between the terminal devices 501, 502 and the server 504. Network 503 may include various types of connections, such as wired links, wireless communication links, or fiber-optic cables.
The terminal devices 501, 502 may interact with a server 504 via a network 503 to receive or send messages or the like. The terminal devices 501, 502 may have various client applications installed thereon, such as a social application, a web browser application, a search application, and a news application. After detecting the first preset operation performed by the user, the client application in the terminal device 501, 502 may overlay the currently displayed first image with the reduced third image of the second image, and animate the third image to the second image.
The terminal devices 501 and 502 may be hardware or software. When the terminal devices 501, 502 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal devices 501 and 502 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 504 may be a server that provides various services. For example, the server 504 may store the second image in advance, so that the terminal devices 501 and 502 may acquire the second image from the server 504 after detecting the first preset operation performed by the user.
The server 504 may be hardware or software. When the server 504 is hardware, it can be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server 504 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by the terminal devices 501 and 502, and accordingly, the image processing apparatus may be provided in the terminal devices 501 and 502.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 6, shown is a schematic diagram of an electronic device (e.g., the terminal device of fig. 5) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a second image in response to detecting a first preset operation executed by a user for a currently displayed first image, wherein the first image is an image obtained by reducing the area of a target object in the second image to a preset size; covering the first image with a third image obtained by reducing the second image; an animated transition operation is performed that animates the third image to the second image, the animated transition operation including at least one of a translation, a magnification, and a rotation.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the covering unit may also be described as "a unit that covers the first image with a third image obtained by reducing the second image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only a description of preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An image processing method, comprising:
acquiring a second image in response to detecting a first preset operation executed by a user for a currently displayed first image, wherein the first image is an image obtained by reducing the area of a target object in the second image to a preset size;
covering the first image with a third image obtained by reducing the second image;
performing an animation transition operation of animating the third image at the current position to the second image at the preset position, the animation transition operation including at least one of translation, magnification and rotation;
the method further comprises the following steps: in response to detecting a second preset operation performed by the user on the second image at the preset position, animating the second image at the preset position to a third image at the current position; and closing the page where the third image is located to display the first image.
2. The method of claim 1, wherein overlaying the first image with a third image resulting from reducing the second image comprises:
acquiring position information of a region of the target object in the second image;
reducing the second image so that the area indicated by the position information has the same size as the first image, to obtain the third image;
placing the third image over the first image, and causing a region of the target object in the third image to coincide with the first image.
3. The method of claim 1, wherein performing an animated transition operation that animates the third image in a current position to the second image in a preset position comprises:
determining a pixel value matrix of the third image and a pixel value matrix of the second image, wherein the pixel value matrix comprises pixel values of all pixel points in the images;
determining parameters required for the animation transition of the third image at the current position to the second image at the preset position according to the pixel value matrix of the third image, the pixel value matrix of the second image, the preset time length and the preset time interval, wherein the parameters comprise parameters for performing at least one of translation, amplification and rotation;
and transitioning, according to the determined parameters, the third image at the current position to the second image at the preset position in an animated manner.
4. A method according to any one of claims 1 to 3, wherein the first image is obtained by:
cutting the area where the target object is located from the second image to obtain a target image;
and reducing the target image to the preset size to obtain the first image.
5. An image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for responding to the detection of a first preset operation executed by a user aiming at a currently displayed first image, and acquiring a second image, wherein the first image is an image obtained by reducing the area of a target object in the second image to a preset size;
an overlaying unit configured to overlay the first image with a third image obtained by reducing the second image;
an execution unit, configured to execute an animation transition operation of transitioning the third image at the current position to the second image at a preset position, wherein the animation transition operation includes at least one of translation, magnification, and rotation;
the device further comprises: a transition unit, configured to, in response to detecting a second preset operation performed by the user with respect to the second image at the preset position, animate the second image at the preset position to a third image at the current position; and the display unit is used for closing the page where the third image is located so as to display the first image.
6. The apparatus of claim 5, wherein the covering unit comprises:
an acquisition subunit, configured to acquire position information of a region of the target object in the second image;
a reducing subunit, configured to reduce the second image so that the area indicated by the position information has the same size as the first image, to obtain the third image;
a placing subunit, configured to place the third image on top of the first image, and to make a region of the target object in the third image coincide with the first image.
7. The apparatus of claim 5, wherein the execution unit comprises:
a first determining subunit, configured to determine a pixel value matrix of the third image and a pixel value matrix of the second image, where the pixel value matrices include pixel values of each pixel point in an image;
a second determining subunit, configured to determine, according to the pixel value matrix of the third image, the pixel value matrix of the second image, a preset time duration, and a preset time interval, parameters required for transitioning the third image at the current position to the second image at the preset position in an animated manner, where the parameters include parameters for performing at least one of translation, amplification, and rotation;
and a transition subunit, configured to transition, according to the determined parameters, the third image at the current position to the second image at the preset position in an animated manner.
8. The apparatus according to any of claims 5-7, further comprising a first image acquisition unit configured to:
cutting the area where the target object is located from the second image to obtain a target image;
and reducing the target image to the preset size to obtain the first image.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN201911051899.9A 2019-10-30 2019-10-30 Image processing method and device and electronic equipment Active CN110825286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911051899.9A CN110825286B (en) 2019-10-30 2019-10-30 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911051899.9A CN110825286B (en) 2019-10-30 2019-10-30 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110825286A CN110825286A (en) 2020-02-21
CN110825286B true CN110825286B (en) 2021-09-03

Family

ID=69551690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911051899.9A Active CN110825286B (en) 2019-10-30 2019-10-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110825286B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053286B (en) * 2020-09-04 2023-09-05 抖音视界有限公司 Image processing method, device, electronic equipment and readable medium
JP7463263B2 (en) * 2020-12-10 2024-04-08 株式会社日立ビルシステム REMOTE MAINTENANCE OPERATION MANAGEMENT SERVER AND REMOTE MAINTENANCE OPERATION MANAGEMENT METHOD
CN112612389B (en) * 2020-12-25 2022-06-10 珠海金山网络游戏科技有限公司 Display system and method for object in sliding window
CN112764845B (en) * 2020-12-30 2022-09-16 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7408559B2 (en) * 2005-12-01 2008-08-05 Hewlett-Packard Development Company, L.P. Upscaling of anti-aliased graphical elements
CN109427085A (en) * 2017-08-29 2019-03-05 阿里巴巴集团控股有限公司 A kind of processing of image data, rendering method, server and client
CN110083285A (en) * 2011-08-01 2019-08-02 索尼公司 Information processing unit, information processing method and program
CN110210276A (en) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of motion track acquisition methods and its equipment, storage medium, terminal

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6396941B1 (en) * 1996-08-23 2002-05-28 Bacus Research Laboratories, Inc. Method and apparatus for internet, intranet, and local viewing of virtual microscope slides
JP4669912B2 (en) * 2005-07-08 2011-04-13 株式会社リコー Content browsing system, program, and content browsing method
KR100727974B1 (en) * 2005-09-10 2007-06-14 삼성전자주식회사 Method and apparatus for making thumbnails out of digital image
CN101192230A (en) * 2006-11-30 2008-06-04 重庆优腾信息技术有限公司 Method and device for opening and closing picture-browsing window
CN101192129B (en) * 2006-11-30 2012-05-30 重庆优腾信息技术有限公司 Table top background control method and device
US20140096026A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Establishing Simulated Force Dynamics Between Two or More Digital Assets Displayed in an Electronic Interface
JP6097535B2 (en) * 2012-11-29 2017-03-15 キヤノン株式会社 Image forming apparatus, control method therefor, and program
CN104731480B (en) * 2015-03-31 2020-03-17 努比亚技术有限公司 Image display method and device based on touch screen
CN104991725B (en) * 2015-07-28 2018-02-23 北京金山安全软件有限公司 Picture clipping method and system
CN106484242B (en) * 2016-09-19 2019-09-20 北京京东尚科信息技术有限公司 The information display method and device at interface
CN110084204B (en) * 2019-04-29 2020-11-24 北京字节跳动网络技术有限公司 Image processing method and device based on target object posture and electronic equipment

Also Published As

Publication number Publication date
CN110825286A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110825286B (en) Image processing method and device and electronic equipment
CN110784754A (en) Video display method and device and electronic equipment
CN111399956B (en) Content display method and device applied to display equipment and electronic equipment
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN111291244B (en) House source information display method, device, terminal and storage medium
CN111190520A (en) Menu item selection method and device, readable medium and electronic equipment
CN110298851B (en) Training method and device for human body segmentation neural network
CN110852946A (en) Picture display method and device and electronic equipment
CN114064593A (en) Document sharing method, device, equipment and medium
CN114722320A (en) Page switching method and device and interaction method of terminal equipment
CN114417782A (en) Display method and device and electronic equipment
CN111259291B (en) View display method and device and electronic equipment
CN112258622A (en) Image processing method, image processing device, readable medium and electronic equipment
CN111324405A (en) Character display method and device and electronic equipment
CN111273884A (en) Image display method and device and electronic equipment
CN112256221A (en) Information display method and device and electronic equipment
CN110825993B (en) Picture display method and device and electronic equipment
CN115576458A (en) Application window display method, device, equipment and medium
CN111338827B (en) Method and device for pasting form data and electronic equipment
CN114066722A (en) Method and device for acquiring image and electronic equipment
CN114399696A (en) Target detection method and device, storage medium and electronic equipment
CN113066166A (en) Image processing method and device and electronic equipment
CN111835917A (en) Method, device and equipment for showing activity range and computer readable medium
CN111770385A (en) Card display method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant