Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant disclosure and are not limiting of the disclosure. It should be noted that, for the convenience of description, only the parts relevant to the related disclosure are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method for displaying an image or the apparatus for displaying an image of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a shooting application, a video playing application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices described above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. This is not particularly limited herein.
The server 105 may be a server that provides various services, such as a background server that provides support for images or videos taken on the terminal devices 101, 102, 103. The background server can receive the image or video sent by the terminal device or send the image or video to the terminal device.
It should be noted that the method for displaying an image provided by the embodiment of the present disclosure is generally performed by the terminal devices 101, 102, 103, and accordingly, the apparatus for displaying an image is generally disposed in the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the image to be magnified does not need to be acquired from a remote location, the system architecture described above may not include a server and a network.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for displaying an image according to the present disclosure is shown. The method for displaying an image includes the steps of:
Step 201, acquiring a currently displayed image to be enlarged.
In the present embodiment, an execution body (e.g., a terminal device shown in fig. 1) of the method for displaying an image may acquire the currently displayed image to be enlarged. The image to be enlarged may be an image displayed on a screen included in the execution body, or an image displayed on another electronic device communicatively connected to the execution body.
As an example, the execution body may be a smartphone, and the user may use the smartphone to take a selfie. The image to be enlarged may be the image currently displayed on the screen of the smartphone while the user takes the selfie.
Step 202, determining a human body image from the image to be enlarged.
In this embodiment, the execution body may determine a human body image from the image to be enlarged. Specifically, as an example, the execution body may determine the human body image from the image to be enlarged using an existing human body image detection method. For example, the human body image may be detected using an existing object detection model with the human body as the detection target. The object detection model may be a model built on an existing object detection network (e.g., SSD (Single Shot MultiBox Detector), DPM (Deformable Part Model), etc.). The object detection model may determine the position of a target image (e.g., a human body image) in an image input thereto. In general, the object detection model may output coordinate information that characterizes the position of the human body image in the image to be enlarged. For example, the coordinate information may include the coordinates of two opposite corners of a rectangular frame; a rectangular area may be determined in the image to be enlarged from the coordinates of the two opposite corners, and the image covered by the rectangular area is the human body image.
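For illustration only, the rectangular-area step described above can be sketched in Python. The helper name `crop_body` and the plain nested-list image representation are assumptions for this sketch, not part of the disclosure.

```python
def crop_body(image, corners):
    """Crop the human body region from the image to be enlarged.

    `image` is a height x width array given as nested lists; `corners`
    holds the (x, y) coordinates of two opposite corners of the
    rectangular frame output by the object detection model.
    """
    (x1, y1), (x2, y2) = corners
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    # The image covered by the rectangular area is the human body image.
    return [row[left:right] for row in image[top:bottom]]
```

Because the two corners are sorted first, the sketch works regardless of which corner the detection model lists first.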
In some optional implementations of this embodiment, the execution body may determine the human body image from the image to be enlarged according to the following steps:
First, human skeleton key point detection is performed on the image to be enlarged to obtain a set of human skeleton key point information. The human skeleton key point information is used to indicate human skeleton key points, i.e., points that characterize specific parts of the human body, such as the top of the head, the elbow joints, and the shoulder joints. The human skeleton key point information may include coordinates, in a coordinate system established on the image to be enlarged, that characterize the locations of the human skeleton key points in the image to be enlarged.
In this embodiment, the execution body may perform human skeleton key point detection on the image to be enlarged according to various existing methods for determining human skeleton key points. For example, the execution body may input the image to be enlarged into a pre-trained convolutional neural network (CNN) to obtain the set of human skeleton key point information. The convolutional neural network may be any of various conventional convolutional neural networks, such as R-CNN (Region-CNN), STN (Spatial Transformer Networks), and the like. It should be noted that the above method for detecting human skeleton key points is a well-known technique that is widely studied and applied at present, and is not described herein again.
Then, the human body image is determined based on the set of human skeleton key point information. Specifically, as an example, the execution body may determine, as the human body image, the image covered by the smallest rectangle in the image to be enlarged that includes the human skeleton key points respectively indicated by the pieces of human skeleton key point information.
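The smallest enclosing rectangle can be sketched as follows; this is an illustrative helper (the name `body_box_from_keypoints` is hypothetical), with key points given as (x, y) tuples.

```python
def body_box_from_keypoints(keypoints):
    """Smallest axis-aligned rectangle containing every detected
    human skeleton key point, returned as two opposite corners."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

The two corners returned can then be used directly as the rectangular area that delimits the human body image.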
In some optional implementations of this embodiment, the execution body may determine, as the human body image, an image representing the upper half of the human body based on the set of human skeleton key point information. Specifically, as an example, each piece of human skeleton key point information has a corresponding serial number, which may be assigned by the execution body, when the set is detected, according to the human body part corresponding to the indicated key point. A technician may preset the serial numbers of the human skeleton key point information representing the upper half of the human body, and the execution body may determine, as the human body image, a rectangular region in the image to be enlarged that includes the human skeleton key points indicated by the information corresponding to these serial numbers (for example, the region enclosed by the smallest such rectangle, or by that rectangle enlarged by a predetermined multiple). For example, the human skeleton key point information characterizing the upper half of the human body may include: information indicating the top of the head, information indicating the two shoulder joints, and information indicating the two hip joints. The execution body may then determine, according to the preset serial numbers of these pieces of information, a rectangular region including the indicated human skeleton key points as the human body image.
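The serial-number filtering can be sketched as follows. The specific serial numbers below are hypothetical; the actual numbering depends on the key point detector used.

```python
# Hypothetical preset serial numbers for the upper half of the body
# (top of head, two shoulder joints, two hip joints); the real
# numbering depends on the key point detector used.
UPPER_BODY_IDS = {0, 5, 6, 11, 12}

def upper_body_box(keypoint_info):
    """Rectangle over the key points whose preset serial numbers mark
    the upper half of the body; `keypoint_info` maps each serial
    number to (x, y) coordinates."""
    pts = [p for sn, p in keypoint_info.items() if sn in UPPER_BODY_IDS]
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

Key points whose serial numbers are not in the preset set (e.g., knees or ankles) are simply ignored when the rectangle is computed.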
Step 203, determining whether the size of the human body image is smaller than a preset size.
In this embodiment, the execution body may determine whether the size of the human body image is smaller than a preset size. The size of an image is usually characterized by its number of pixels, for example x × y, where x is the number of horizontal pixels and y is the number of vertical pixels. The preset size may be a preset fixed size; alternatively, it may be determined according to a preset ratio: for example, if the size of the interface for displaying the image to be enlarged is m × n and the preset ratio is 0.8, the preset size is 0.8m × 0.8n. It should be noted that the size of the human body image may be determined to be smaller than the preset size when at least one of the following conditions is satisfied: the number of horizontal pixels of the human body image is smaller than the number of horizontal pixels of the preset size; the number of vertical pixels of the human body image is smaller than the number of vertical pixels of the preset size; or the number of pixels on a diagonal of the human body image is smaller than the number of pixels on a diagonal of the rectangle characterized by the preset size. It should be understood that these conditions are merely exemplary; in practice, other conditions may also be used.
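The exemplary comparison conditions can be sketched as one predicate; `body_smaller_than_preset` is a hypothetical helper, with sizes given as (horizontal, vertical) pixel counts.

```python
def body_smaller_than_preset(body_size, preset_size):
    """True when the human body image counts as smaller than the
    preset size; any one of the exemplary conditions suffices, and
    this sketch checks the horizontal, vertical, and diagonal
    pixel counts."""
    bw, bh = body_size
    pw, ph = preset_size
    return (bw < pw or bh < ph
            # compare squared diagonal lengths to avoid a square root
            or bw * bw + bh * bh < pw * pw + ph * ph)
```

When the predicate returns True, the flow proceeds to enlarging the image to be enlarged; otherwise no enlargement is needed.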
Step 204, in response to determining that the size of the human body image is smaller than the preset size, enlarging the image to be enlarged so as to bring the human body image to a target size.
In this embodiment, the executing body may enlarge the image to be enlarged in response to determining that the size of the human body image is smaller than the preset size, so that the human body image reaches the target size.
The target size may be equal to the preset size, or may be a size determined based on a preset magnification. The execution body may enlarge the image to be enlarged so that the human body image within it is enlarged to the target size. It should be noted that, in the process of enlarging the image to be enlarged, the human body image is determined to have reached the target size when at least one of the following conditions is satisfied: the number of horizontal pixels of the human body image equals the number of horizontal pixels of the target size; the number of vertical pixels of the human body image equals the number of vertical pixels of the target size; or the number of pixels on a diagonal of the human body image equals the number of pixels on a diagonal of the rectangle characterized by the target size. It should be understood that these conditions are merely exemplary; in practice, other conditions may also be used.
In some optional implementations of this embodiment, the execution body may determine that the human body image has reached the target size in response to determining that the magnification applied to the image to be enlarged has reached a preset magnification while the size of the enlarged human body image is still smaller than the preset size. Generally, when an image to be enlarged is enlarged, it can be enlarged at most to the preset magnification. For example, the preset magnification may be the maximum magnification of the camera when the image was taken. When the image to be enlarged has been enlarged to the preset magnification, if the human body image is still smaller than the preset size, the human body image is determined to have reached the target size, and its size at that moment is taken as the target size. By setting the preset magnification, the loss of definition that enlarging a very small human body image would otherwise cause can be reduced.
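The capped magnification can be sketched as follows; `magnification_to_apply` is a hypothetical helper illustrating, under the assumption of uniform scaling, how the preset maximum bounds the enlargement.

```python
def magnification_to_apply(body_size, target_size, preset_max):
    """Magnification that brings the human body image to the target
    size, capped at the preset maximum magnification so that a very
    small body image is not enlarged past the point where its
    definition degrades."""
    bw, bh = body_size
    tw, th = target_size
    # Enlarge until at least one dimension of the body reaches the target.
    needed = max(tw / bw, th / bh)
    return min(needed, preset_max)
```

If the returned value equals `preset_max` while the body is still smaller than the preset size, the current size is simply taken as the target size, as described above.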
Step 205, moving the enlarged image to be enlarged so that the display position of the enlarged human body image reaches a target position.
In this embodiment, the executing body may move the enlarged image to be enlarged so that the display position of the enlarged human body image reaches the target position.
Specifically, the target position may be a position set in advance in an interface for displaying an image to be enlarged. For example, the target position may be a preset region in the interface, and the executing body may move the enlarged image to be enlarged so that the enlarged human body image is in the region, and at this time, it is determined that the display position of the enlarged human body image reaches the target position. Alternatively, the target position may be the entire area of the interface, and when the enlarged human body image is in the interface, it is determined that the display position of the enlarged human body image reaches the target position. It should be noted that when all or part of the enlarged human body image (for example, half or other proportion of the area of the enlarged human body image) is in the region indicated by the target position, it may be determined that the display position of the enlarged human body image reaches the target position.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for displaying an image according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 first acquires an image to be enlarged 302 currently displayed on the screen. Then, the terminal device 301 determines a human body image 303 from the image to be enlarged 302. For example, the terminal device 301 performs human skeleton key point detection on the image to be enlarged and determines a rectangular region including the detected key points as the human body image 303. Next, the terminal device 301 determines that the size of the human body image 303 is smaller than a preset size (for example, the number of vertical pixels of the rectangular human body image 303 is smaller than the number of vertical pixels of the preset size) and enlarges the image to be enlarged 302 (the dotted frame 304 in the figure is the border of the enlarged image) so as to bring the human body image 303 to a target size, which may be the same as the preset size. When the terminal device 301 has enlarged the human body image 303 until its number of vertical pixels equals that of the target size, the enlarged human body image is determined to have reached the target size. Finally, the terminal device moves the enlarged image so as to move the enlarged human body image to the center of the screen (i.e., the target position).
According to the method provided by this embodiment of the present disclosure, the currently displayed image to be enlarged is acquired, a human body image is determined from it, the image to be enlarged is enlarged to bring the human body image to a target size if the human body image is smaller than a preset size, and the enlarged image is finally moved so that the display position of the enlarged human body image reaches a target position. The human body image can thus be moved to the target position on the screen in real time without moving the shooting device, which simulates automatic lens tracking and improves the flexibility of displaying images of people.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for displaying an image is shown. The flow 400 of the method for displaying an image comprises the steps of:
Step 401, acquiring a currently displayed image to be enlarged.
In this embodiment, step 401 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 402, determining a human body image from the image to be enlarged.
In this embodiment, step 402 is substantially the same as step 202 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 403, determining whether the size of the human body image is smaller than a preset size.
In this embodiment, step 403 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 404, in response to determining that the size of the human body image is smaller than the preset size, enlarging the image to be enlarged so as to bring the human body image to the target size.
In this embodiment, step 404 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 405, determining a human body center point from the enlarged human body image.
In the present embodiment, an execution body (e.g., a terminal device shown in fig. 1) of the method for displaying an image may determine a human body center point from the enlarged human body image. Specifically, the human body image may be rectangular, and the center point of the rectangle may be determined as the human body center point.
Optionally, when the human body image is determined according to the human body skeleton key point information set, the human body center point may be determined according to a human body skeleton key point indicated by the pre-specified human body skeleton key point information. For example, the midpoint of a line connecting key points of human bones characterizing the shoulder joints of the human body is determined as the center point of the human body. Alternatively, the intersection of the diagonals of a quadrilateral representing the two shoulder joints and the two hip joints is determined as the center point of the human body.
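The shoulder-midpoint option can be sketched as follows; `body_center_from_shoulders` is a hypothetical helper name for illustration.

```python
def body_center_from_shoulders(left_shoulder, right_shoulder):
    """Midpoint of the line connecting the two shoulder key points,
    used as the human body center point."""
    return ((left_shoulder[0] + right_shoulder[0]) / 2,
            (left_shoulder[1] + right_shoulder[1]) / 2)
```

The diagonal-intersection variant over the two shoulder joints and two hip joints could be built analogously from four key points.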
Step 406, moving the enlarged image to be enlarged so that the human body center point moves toward the interface center point of the interface for displaying the image to be enlarged.
In this embodiment, the executing body may move the image to be enlarged so that the center point of the human body moves toward the interface center point of the interface for displaying the image to be enlarged. The shape of the interface is generally a rectangle, and the center point of the rectangle is the interface center point.
Step 407, in response to determining that the center point of the human body coincides with the center point of the interface, determining that the display position of the enlarged human body image reaches the target position.
In this embodiment, the executing body may determine that the display position of the enlarged human body image reaches the target position in response to determining that the human body center point coincides with the interface center point. Specifically, when the distance between the human body center point and the interface center point is less than or equal to a preset distance threshold, it is determined that the human body center point coincides with the interface center point, and at this time, the display position of the enlarged human body image reaches a target position, so that the enlarged human body image is centered on the interface.
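The distance-threshold test can be sketched as one predicate; `points_coincide` is a hypothetical helper for illustration.

```python
def points_coincide(body_center, interface_center, threshold):
    """Treat the two center points as coincident when their distance
    is at most the preset distance threshold (distances are compared
    squared to avoid a square root)."""
    dx = body_center[0] - interface_center[0]
    dy = body_center[1] - interface_center[1]
    return dx * dx + dy * dy <= threshold * threshold
```

Once the predicate holds, movement stops and the enlarged human body image is displayed centered on the interface.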
In some optional implementations of this embodiment, the execution body may further determine that the display position of the enlarged human body image has reached the target position in response to determining that an edge of the enlarged image to be enlarged coincides with an edge of the interface. Specifically, when an edge of the enlarged image coincides with an edge of the interface, the enlarged image cannot be moved any further in that direction, and the display position of the enlarged human body image is then determined to have reached the target position.
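The edge condition amounts to clamping the translation of the enlarged image; the sketch below assumes a top-left-origin coordinate system and an enlarged image at least as large as the interface, with `clamp_offset` a hypothetical helper.

```python
def clamp_offset(offset, image_size, interface_size):
    """Clamp the translation of the enlarged image so that its edges
    never pass inside the interface edges; the enlarged image is
    assumed to be at least as large as the interface."""
    ox, oy = offset
    iw, ih = image_size
    vw, vh = interface_size
    # 0 keeps the left/top image edge at the interface edge;
    # (vw - iw) / (vh - ih) keeps the right/bottom edge there.
    ox = min(0, max(ox, vw - iw))
    oy = min(0, max(oy, vh - ih))
    return ox, oy
```

When the requested offset is clamped, the corresponding image edge coincides with the interface edge and the movement stops there.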
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for displaying an image in this embodiment highlights the step of moving the enlarged human body image to the center of the interface for displaying the image to be enlarged according to the human body center point. The scheme described in this embodiment can therefore display the enlarged human body image centered in the interface, simulating automatic lens tracking more accurately and further improving the flexibility of displaying images of people.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for displaying an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for displaying an image of the present embodiment includes: an acquisition unit 501 configured to acquire a currently displayed image to be enlarged; a first determination unit 502 configured to determine a human body image from an image to be enlarged; a second determination unit 503 configured to determine whether the size of the human body image is smaller than a preset size; an enlarging unit 504 configured to enlarge the image to be enlarged so as to bring the human body image to a target size in response to determining that the size is smaller than a preset size; and a moving unit 505 configured to move the enlarged image to be enlarged so that the display position of the enlarged human body image reaches the target position.
In this embodiment, the acquisition unit 501 may acquire the currently displayed image to be enlarged. The image to be enlarged may be an image displayed on a screen included in the apparatus 500, or an image displayed on another electronic device communicatively connected to the apparatus 500.
In this embodiment, the first determination unit 502 may determine a human body image from the image to be enlarged. Specifically, as an example, the first determination unit 502 may determine the human body image from the image to be enlarged using an existing human body image detection method. For example, the human body image may be detected using an existing object detection model with the human body as the detection target. The object detection model may be a model built on an existing object detection network (e.g., SSD (Single Shot MultiBox Detector), DPM (Deformable Part Model), etc.). The object detection model may determine the position of a target image (e.g., a human body image) in an image input thereto. In general, the object detection model may output coordinate information that characterizes the position of the human body image in the image to be enlarged. For example, the coordinate information may include the coordinates of two opposite corners of a rectangular frame; a rectangular area may be determined in the image to be enlarged from the coordinates of the two opposite corners, and the image covered by the rectangular area is the human body image.
In this embodiment, the second determination unit 503 may determine whether the size of the human body image is smaller than a preset size. The size of an image is usually characterized by its number of pixels, for example x × y, where x is the number of horizontal pixels and y is the number of vertical pixels. The preset size may be a preset fixed size; alternatively, it may be determined according to a preset ratio: for example, if the size of the interface for displaying the image to be enlarged is m × n and the preset ratio is 0.8, the preset size is 0.8m × 0.8n. It should be noted that the size of the human body image may be determined to be smaller than the preset size when at least one of the following conditions is satisfied: the number of horizontal pixels of the human body image is smaller than the number of horizontal pixels of the preset size; the number of vertical pixels of the human body image is smaller than the number of vertical pixels of the preset size; or the number of pixels on a diagonal of the human body image is smaller than the number of pixels on a diagonal of the rectangle characterized by the preset size. It should be understood that these conditions are merely exemplary; in practice, other conditions may also be used.
In this embodiment, the enlarging unit 504 may enlarge the image to be enlarged to make the human body image reach the target size in response to determining that the size of the human body image is smaller than the preset size.
The target size may be equal to the preset size, or may be a size determined based on a preset magnification. The enlarging unit 504 may enlarge the image to be enlarged so that the human body image within it is enlarged to the target size. It should be noted that, in the process of enlarging the image to be enlarged, the human body image is determined to have reached the target size when at least one of the following conditions is satisfied: the number of horizontal pixels of the human body image equals the number of horizontal pixels of the target size; the number of vertical pixels of the human body image equals the number of vertical pixels of the target size; or the number of pixels on a diagonal of the human body image equals the number of pixels on a diagonal of the rectangle characterized by the target size. It should be understood that these conditions are merely exemplary; in practice, other conditions may also be used.
In this embodiment, the moving unit 505 may move the enlarged image to be enlarged so that the display position of the enlarged human body image reaches the target position.
Specifically, the target position may be a position set in advance in an interface for displaying an image to be enlarged. For example, the target position may be a preset region in the interface, and the moving unit 505 may move the enlarged image to be enlarged so that the enlarged human body image is in the region, and then it is determined that the display position of the enlarged human body image reaches the target position. Alternatively, the target position may be the entire area of the interface, and when the enlarged human body image is in the interface, it is determined that the display position of the enlarged human body image reaches the target position. It should be noted that when all or part of the enlarged human body image (for example, half or other proportion of the area of the enlarged human body image) is in the region indicated by the target position, it may be determined that the display position of the enlarged human body image reaches the target position.
In some optional implementations of this embodiment, the first determining unit 502 may include: a detection module (not shown in the figure) configured to perform human skeleton key point detection on the image to be amplified to obtain a human skeleton key point information set; a first determining module (not shown in the figure) configured to determine a human body image based on the human body bone key point information set.
In some optional implementations of this embodiment, the first determining module may be further configured to: and determining an image for representing the upper half of the human body as a human body image based on the human body skeleton key point information set.
In some optional implementations of this embodiment, the enlarging unit 504 may be further configured to: determine that the human body image has reached the target size in response to determining that the magnification applied to the image to be enlarged has reached the preset magnification while the size of the enlarged human body image is still smaller than the preset size.
In some optional implementations of this embodiment, the moving unit 505 may include: a second determining module (not shown in the figure) configured to determine a human body center point from the enlarged human body image; a moving module configured to move the enlarged image to be enlarged so that the human body center point moves toward the interface center point of the interface for displaying the image to be enlarged; and a third determining module (not shown in the figure) configured to determine that the display position of the enlarged human body image reaches the target position in response to determining that the human body center point coincides with the interface center point.
In some optional implementations of this embodiment, the moving unit 505 may further include: a fourth determining module (not shown in the figures) configured to determine that the display position of the enlarged human body image reaches the target position in response to determining that an edge of the enlarged image to be enlarged coincides with an edge of the interface.
According to the apparatus provided by this embodiment of the present disclosure, the currently displayed image to be enlarged is acquired, a human body image is determined from it, the image to be enlarged is enlarged to bring the human body image to a target size if the human body image is smaller than a preset size, and the enlarged image is finally moved so that the display position of the enlarged human body image reaches a target position. The human body image can thus be moved to the target position on the screen in real time without moving the shooting device, which simulates automatic lens tracking and improves the flexibility of displaying images of people.
Referring now to fig. 6, shown is a schematic block diagram of a terminal device 600 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 6, the terminal device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the terminal device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, a memory; and a communication device 609. The communication device 609 may allow the terminal device 600 to perform wireless or wired communication with other devices to exchange data. While fig. 6 illustrates a terminal device 600 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or may be installed from the storage device 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure. It should be noted that the computer readable medium in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or it may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquire a currently displayed image to be enlarged; determine a human body image from the image to be enlarged; determine whether the size of the human body image is smaller than a preset size; in response to determining that the size of the human body image is smaller than the preset size, enlarge the image to be enlarged so that the human body image reaches a target size; and move the enlarged image so that the display position of the enlarged human body image reaches a target position.
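The sequence of steps carried by the program can be sketched end to end as follows. This is a minimal illustrative model only: `detect_human` stands in for whatever body-detection method the terminal device uses, and the preset and target sizes are assumed example values, not values given in the disclosure.

```python
# Illustrative sketch of the program's steps: acquire the frame, detect the
# human body image, compare its size with a preset size, and compute the
# enlargement factor needed for it to reach the target size.

PRESET_SIZE = (100, 200)   # assumed minimum (width, height) of the body image
TARGET_SIZE = (300, 600)   # assumed target (width, height) after enlarging

def process_frame(frame, detect_human):
    """Return (scale, body_box) for the currently displayed frame.

    detect_human(frame) -> (x, y, w, h) bounding box of the human body,
    or None if no human body image is found.
    """
    box = detect_human(frame)           # determine the human body image
    if box is None:
        return 1.0, None                # nothing to enlarge or move
    x, y, w, h = box
    if w >= PRESET_SIZE[0] and h >= PRESET_SIZE[1]:
        return 1.0, box                 # already large enough; no enlargement
    # Enlarge so the body image reaches the target size, keeping the aspect
    # ratio by taking the larger of the two required factors.
    scale = max(TARGET_SIZE[0] / w, TARGET_SIZE[1] / h)
    return scale, box
```

The returned scale would then be applied to the whole displayed image, after which the moving step described earlier shifts the enlarged image toward the target position.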
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first determining unit, a second determining unit, an enlarging unit, and a moving unit. Here, the names of the units do not in some cases constitute a limitation on the units themselves; for example, the acquisition unit may also be described as a "unit that acquires the currently displayed image to be enlarged".
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.